A lot of upper midscale hotels in the USA are owned by Indians from India whose last name is Patel.
Pretty much every single hotel gets a call from "Mr. Patel" at night asking to wire money due to an emergency. A lot of hotel employees have fallen for it and wired the money. Some have even drilled open the safe, and some have wired money from their personal accounts.
This scam is mostly social engineering without any AI/Deepfake. It's going to be a fun time ahead for everyone.
I think it's kinda weird to expect a source any time someone online shares an anecdote. It was sunny in New York yesterday, do you need a source for that?
At any rate, a 5s Google of "patel hotel indian scam call site:www.reddit.com" reveals dozens of threads of people sharing essentially the same anecdote. Is that enough for you sir, or do you need something peer reviewed?
I too like it when people link to things to back up a statement.
But the way he described the phenomenon made it seem pretty common, and indeed, a quick search for "patel cfo scam hotels" turns up a number of relevant results... Seems like it's a pretty well known, frequently occurring event in the hotel industry.
Nope. Real examples. I don't think there is any upper-midscale hotel that hasn't received these calls. If you follow hotel industry groups, you will see this happening every week. This and IT support calls.
I think "power distance" (a cultural thing - both national culture and corporate culture) might play a role here. In some cultures, you do whatever the big boss asks you to do, regardless of procedure.
(Media reporting suggests this can also be true at some US hardware tech companies).
Having worked with Chinese people, let me tell you this is 100% accurate. It may (and probably will) happen in western countries as well, but the culture makes China, South Korea, Taiwan and Japan extremely vulnerable to this. No one I worked with was willing to refute, question or even raise any doubts if someone they perceive not at their level or, even better, below, was in the call.
Other countries are known for a culture of nothing ever happening without a piece of paper carrying an official-looking stamp. Those are laughably insecure, but the culture could easily be ported to public-key signatures. "Boss voice is only boss voice when it comes with a digitally signed transcript" shouldn't be too hard to introduce in "don't ever question your boss" cultures, I think? Bosses might even enjoy the grandeur of showing off their status with an insignia device. "Orders without proof of identity are irresponsibly bad form" could be surprisingly easy to establish.
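The "boss voice only counts with a digitally signed transcript" idea can be sketched with ordinary message authentication. Here's a minimal, stdlib-only sketch; the key, the order text, and the function names are all illustrative. A real deployment would use asymmetric signatures on a hardware token rather than a shared secret, but HMAC keeps the flow visible:

```python
import hmac
import hashlib

# Hypothetical: a secret shared between the boss's signing device and the
# finance desk, provisioned out of band. All names here are illustrative.
BOSS_KEY = b"provisioned-out-of-band-secret"

def sign_order(order: str) -> str:
    """The boss's device attaches this tag to every instruction."""
    return hmac.new(BOSS_KEY, order.encode(), hashlib.sha256).hexdigest()

def verify_order(order: str, tag: str) -> bool:
    """The employee checks the tag before acting on 'boss voice'."""
    return hmac.compare_digest(sign_order(order), tag)

tag = sign_order("wire 500,000 to vendor account A")
assert verify_order("wire 500,000 to vendor account A", tag)        # genuine order
assert not verify_order("wire 500,000 to attacker account B", tag)  # altered order fails
```

The cultural point stands either way: the check only helps if refusing an untagged order is treated as good form rather than insubordination.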
I think that you are unfamiliar with these cultures. In Japan, you would never ask the voice that sounds like the boss to prove his identity with a digitally signed transcript - even if that's a fireable offense. It is so culturally alien to them that it would never get through.
That's not true though because most decision and execution processes in Japan are daisy chained. One person can't just make you send a tonne of money because you'd normally have to forward it onto someone else, who clears it with someone else, and then we all sign a ringisho.
The daisy chaining prevents single responsibility stuff like this.
Also for what it's worth I've done verification callbacks to every single one of my bosses at some point during my career here and no-one's ever questioned it.
> The term of "ringi" has two meanings. The first meaning being of "rin", 'submitting a proposal to one's supervisors and receiving their approval,' and "gi" meaning 'deliberations and decisions.' Corporate policy is not clearly defined by the executive leadership of a Japanese company. Rather, the managers at all levels below executives must raise decisions to the next level except for routine decisions. The process of "ringi decision-making" is conducted through a document called a "ringisho".
I take it that you were not raised in Japan? Do you agree that someone raised in Japan would have a harder time questioning their boss?
I'm no expert, I've never been to the land of the rising sun. This is what people have told me of their time there. Your input is very much appreciated.
I think their idea is that the boss would be the one to introduce it ("Bosses might even enjoy the grandeur of showing off their status with an insignia-device.") and because of the culture it wouldn't be difficult for employees to adapt and go along with those new rules.
I think the point is that this works until the "boss" says they need X right now and can't provide digital proof because it's not working for Y reason. Do the employees say no? That's the real test.
The problem is that inevitably the boss will forget his signature one day. Who is going to challenge him? And if he is challenged, how will he take it?
Even in the West, nobody of low seniority challenges the C-level executive when they tailgate or walk around without their badge. And if you are new and there is an important-looking individual you don't recognise, you leave him alone, totally validating the "act like you belong" adage.
I was quite annoyed - disappointed too - during security induction (Australian NSA). They explicitly said we should challenge anyone not wearing a badge, but then joked that we should learn the department heads first so we don’t accidentally confront the “wrong” person.
Exactly the wrong message to send, particularly for an agency that’s supposedly an expert on security.
A good example of the challenges of real-life hardening. Anecdotes like this are a valuable addition to any discussion of security I think. I perfectly understand what's wrong about the attitude transferred in the joke, yet I can easily see myself being the person sending that wrong message. Very educational!
This is a thing that already happens in Japan, where physical personal and company seals (inkan) are regularly used for all sorts of documents and transactions that would get signed in the West. But they've evolved protocols to ensure they're secured and stored, which is why this rarely causes problems in real life.
In practice, there is little if any difference between seals and signatures in terms of security.
A signature (or stamp) is easy to fake and get away with for a while. It's very rare that the authenticity of signatures is checked right away. Perhaps it's even easier than stealing or faking a not-particularly-secured stamp. It only happens when some problem arises and is investigated after the fact. The question is not whether the signature is "authentic enough" but who signed the document. You can ask and answer this question about a seal equally well.
The reason we have signatures (or stamps) is as an explicit ritual signifying ratification of a document that one cannot plausibly deny later.
Also, for a country that's so technologically advanced, Japan loves paperwork. They have reams of paperwork that you are expected to furnish for something as simple as registering an office move from a building in one part of town to another building in another. It's mind-boggling just how entrenched bureaucracies get if you give them an inch of room to play.
> It is so culturally alien to them that it would never get through.
If there's a need then this will change. You might as well say that they'd never use a telephone because it's culturally alien. It was alien, but it was useful, so they adapted. Same with email and video calls. The boss has to log into their banking just like everyone else, because there's a need for it. If there's a need for this, the OP's suggestion seems like a pretty good one, as it augments the existing culture with a security step.
I think the way it would work is that the boss himself would send the signature somehow (e.g. on teams) and bosses that don't want their businesses to fall victim would have to ensure that their employees would never accept a call from them outside of the system that allowed the signature check.
What if that voice comes packed in bad clothing, smelly, and full of grammar mistakes? Because that's what an order without credentials would feel like once the rituals of signature verification are established as expected form. The correct reply to an underling who goes all sir-yes-sir without checking would be "do you consider me so unimportant that you don't think it worth your precious time to verify that I actually am who I claim to be?". It would certainly have to be a cultural adaptation initiated from the top. If subordinates are expected to fill in for whatever diligence those higher up lack, it won't work, no matter where.
It's true, I don't know Japan, but I suspect that they might have it much easier to adapt than western pretend-buddy orgs.
Not really. In those cultures, just like in armies everywhere, authority comes from the boss/commander and they can override their previous instructions at any point (and many do at some point).
There’s often a distinction in armies between “illegal command” issued by a commander, which one has to obey (or risk disciplinary action) and “blatantly illegal command” which must be disobeyed. An example of the former would be “keep your post for 20 hours straight” (where regulations limit a shift to e.g. 12 hours). An example of the latter would be “cut the limbs off other members of your platoon”.
An army setting is a much better model of some cultures. They are not as bad, but if taken ad absurdum they would look like an army setting.
Armies are a good example of authority clearly coming from the role, not from the person, and roles inseparably tied to a lot of expected formal behavior. In an impersonation-hardened army (or corporation), you simply wouldn't make it beyond the very lowest rank unless you demonstrate flawless authentication on both receiver and sender side. Just like you wouldn't make it badly dressed.
Except .. there’s no effective authentication protocol in place to verify that a specific person is indeed in a specific role (or is indeed the person they claim to be).
Sure, that's the gap that needs to be filled. In the age of deep fakes even more urgently than before.
But which work cultures will find it easier to effectively deploy countermeasures?
Informal ones, where everybody acts like first names buddies all the way to the CEO, where they believe they are invulnerable because all those pretend-equal underlings are invited to speak up when they sense something fishy? What if they don't sense anything?
Or formal environments, where authentication tools could be systematically added to the preexisting and deeply entrenched set of rituals?
"That guy is not just acting like a colonel, the device we now all have to hold while saluting confirms that the biometric checksum embedded in his uniform insignia matches and is signed with central command keys". Yes, that protocol is not in place, but if it was introduced it would actually work. Now try the same in an informal environment where everything is supposed to be solved through good personal relations. The exact same tools, deployed in a buddy-org, would only ever get used retroactively, for pushing blame down the hierarchy.
I actually think it will be the informal cultures who will have an easier time integrating it.
Because pockets of “Sir, I recognize you are my boss but you still need to do this properly through the regular channels” are, in my experience, more common in a non-authoritative setting than in an authoritative one (my familiarity is mostly with armies, not with Asian societies).
And if these bubbles do exist, I think it is easier for them to expand in a disorganized, distributed manner; unlike an authoritative society where everything like this must properly flow top down.
Heh, "pockets of correct behavior" is an interesting perspective, truly reads like an insider view, I certainly would not have put it like that.
The problem in the informal culture is that insisting on formality (the authentication check can never not be a formality) is perceived as a signal "they don't like me". That's a huge incentive for cutting corners, both up and down the hierarchy. In an environment that prides itself in formality, it's at least possible to sell going through the motions as a sign of respect. The failure mode I'm talking about is not what's happening the day the boss doesn't have their keys (that's challenging in any environment, and certainly not easy on the authoritative end of the spectrum), but how likely it is that the absent keys would even come up, how often a check will actually happen. Lack of procedures is the defining quality of informal organizations.
When it's routine, the orderly refusal is not so much "but you have to do this properly" (underling ordering boss around) but "you know that I can't do that without.." (underling showing off being a good underling)
That attitude will have to change, there's no way around it. These live deepfakes will be as easy to create as a word document in no more than five years and maybe less than two.
The Fukushima nuclear accident was not enough to change this culture.
And it is far too easy to state that a foreign culture needs to change. The Japanese could say that American or Western culture needs to change, just for example with the glorification of violent criminals in media.
The Fukushima nuclear accident had a far narrower impact than this will have. The Fukushima accident did not result in 'push button to gain money' GitHub software projects.
If an angry video call from the boss is all that is required to exfiltrate millions of dollars, and boss video calls become as easy to produce as spam emails, then the exfiltration of funds from Japanese organizations becomes as fast as approximately (spam email send rate) * (millions of dollars).
When you have received the 7th angry call from the boss that day, demanding funds be sent immediately, you eventually realize you need a different system. At a minimum the boss will need to come be angry in person.
Yeah, with the advent of good deepfakes we're at the point where everyone having their own private key is a must for all communication that's not face to face.
That would work in Australia, and in most places in the US and Britain as well. I can imagine Israelis calling the real boss, telling them “I don’t believe it’s really you”, and refusing to do it.
But that’s not how it works in authoritative cultures.
Maybe it will teach authoritative cultures to be less authoritative, and to allow people to question authority. Because it's going to cost you money if they don't.
I don't know, most professionals in HK I have met are pretty open to challenging people but maybe that's just the people I choose to work with.
In any case what I find strange is that usually HK finance companies (like much of the rest of the world) will have some kind of maker-checker system which prevents individual mistakes like this.
I met 3 top managers from China (talking about very high-level managers) to whom I had to talk, and they were kinda more challenging and open than the other Chinese I had to work with, but nowhere near the Germans, Americans, or Italians.
Having dated an abnormal number of mainlanders, one thing that I always found weird was the amount of "rule following": top-down directives that you must obey.
I can only imagine this being leveraged nefariously.
Back in my days building custom software (in the US/Canada), a lot of the PM work was figuring out how the process overrides worked. Every organization has a set of formal rules... and the way things actually work (and 50% of my job was making sure our CRUD apps were more than just spreadsheets with changelogs).
But having lived & worked in a few countries now, the way other cultures do their overrides is always more visible (e.g. Country A you might pay bribes to get out of tickets, country B might just not pull people over in nice cars)
It's funny how when something happens in Asia some commenters always say it's because of the culture.
Sure there might be cultural differences, but maybe this guy is just careless.
There was a case in the US where someone pretended to be a cop, called a fast food restaurant, and actually convinced the manager to strip search an employee.
I guess this is also a case of cultural power distance.
This is why a Korean Air flight crashed at some point. The copilot knew something was wrong, but the pilot was a lot more senior than him, and in Korean culture it's normal to defer to your elders (according to The Checklist Manifesto). The cockpit recording showed what happened, and it directly contributed to efforts to standardize crew resource management training. Other incidents, like a flight out of Morocco where an older male pilot disregarded the concerns of his female copilot and crashed the plane, have reinforced the need for CRM, especially for pilots from cultures where people may be ignored for social reasons.
From the article this was an employee in Hong Kong on a video call with people supposedly in the UK.
Power distance might matter, depending on nationality of participants.
Also if English is a second language, then perhaps the sound quality of the synthetic voices wouldn't need to be as good - we are surely better at recognising voices in our mother tongue.
Scammers have fooled countless mothers into believing their voice belongs to one of their children since before text-to-speech was a thing. (Just to say it's not incredibly hard. I'm not suggesting that being able to automate it wouldn't have a huge impact.)
In some ways the west is still remarkably feudal but to the direct chain of managers not just directly to your “liege lord”. I regularly see people say no to big bosses who are outside the direct management even if they have high ranks.
> “(In the) multi-person video conference, it turns out that everyone [he saw] was fake,”
This could be totally real, but it could also be one employee saying 'the CFO was on a call' and claiming deepfake as an excuse.
I guess it was a matter of time before this occurred. How long before scammers do bulk video calls to parents/grandparents pretending to be the kids, saying they are in trouble and need $$$ ASAP?
The even better question, is how can this be stopped or reduced and is there a new business there?
> scammers do bulk video calls to parents/grandparent pretending to be the kids saying they are in trouble and need $$$ ASAP
Especially when a high percentage of people post their face and voice on social media. I find this especially crazy in the age of AI. I trained a Stable Diffusion LoRA with photos of a friend and showed it to them (with permission), and they were completely shocked. I showed it to one of their friends, and they were fooled for at least a minute; it took some careful looks to find discrepancies.
The reality is that if you speak at a conference there's a decent chance there's video of that on YouTube. If you have any sort of public presence as part of your job, your voice and likeness are probably out there whether you put it out yourself or not.
Keeping yourself anonymous isn't compatible with a lot of even moderately senior-level jobs out there.
CFOs of public companies typically do quarterly earnings conference calls with Wall St. So there's potentially plenty of recordings of their voices using the same kinds of language that it would take to fake something like this.
One of the tradeoffs you make as you move up the ladder is that you increasingly can't be an anonymous person. That may be a good tradeoff or bad depending upon your perspective.
You would think that executives would clone their own voices for the earnings call script readings like a lot of video essay YouTubers do now. But no, they still use terrible conference call systems for earnings calls rather than decent microphones that would be used in a podcast. That could actually be a silver lining here when it comes to creating quality training data.
I'd guess that approximately 0% of moderately senior level jobs involve ever speaking at a conference or other fairly public and recorded venue. Company-internal training videos or recorded meetings are more common, but that's a far narrower attack surface.
go to the YouTube channels of companies like AWS, Azure, GCP --they publish 10 to 30 minute videos of various employees, from product managers to architects etc doing explainers on various topics, products and services they offer.
More generally --the billions of hours and growing of audio video on YouTube, TikTok, and other platforms -- is literally someone in real life (most cases), likely some employee, that could be or become a middle manager somewhere.
>go to the YouTube channels of companies like AWS, Azure, GCP --they publish 10 to 30 minute videos of various employees, from product managers to architects etc doing explainers on various topics
1. Most companies don't do that.
2. AWS, for example, has what, 100k employees? What percentage of them are actually featured in those videos?
>More generally --the billions of hours and growing of audio video on YouTube, TikTok, and other platforms -- is literally someone in real life (most cases), likely some employee, that could be or become a middle manager somewhere.
A vanishingly small percentage of that content is generated as part of that middle management job. Yes, many people choose to place themselves on publicly accessible video, but it mostly isn't part of a mid level office job, so not doing so isn't incompatible with holding such a position.
You actually think that mid-level execs, much less lower-level people who want to advance in part by speaking at events, don’t end up appearing on video? I know I’m on plenty of it.
Even if not videos, most people have a bunch of photos in the public, or widely available. Even if the person themself doesn't share, other people around them will. It would take a very insular life to have no photos of yourself in public domain.
I once told my colleagues that I didn't think they could find a photo of me on the web.
5 minutes later, one of them came up with a pic: it was a group photo of the company staff, taken a few weeks earlier (with me skulking at the back; I never wanted to be in the photo). It was in an article on the company blog.
Volume and lack of metadata is effective anonymity for most people in most circumstances if they've avoided doing anything that creates a public presence. But most people probably have photos at least on the web even if they didn't put them there.
> If you refuse and it's an actual emergency with the real CFO, it might be a career limiting move, if you don't get fired.
This is really the crux of it: senior management needs to take the lead in setting up policies which are efficient enough not to encourage people to bypass them, and a culture where everyone in the company feels comfortable telling the CEO “I’m not allowed to do that”. This is possible, but it has to be actively cultivated.
I've had a CFO that didn't talk to tech people except through a proxy hold a "tell your mom to pass the potatoes" style meeting with his secretary as the medium. Yes, I stood there while he talked to his secretary, who repeated what each of us said, 5 feet away from each other. This was a large bank.
I've had a general counsel yell at me, with spittle, because I suggested that it was probably a bad thing that the IT dept was effectively acting as power of attorney for the company by doing digital signing for him, and that he should probably learn how to do it himself for legal reasons.
When you get to choose between a potentially career-limiting move by speaking up to a CFO and a freedom-limiting move by doing the potentially illegal thing they say... it may be a good idea to do the first one, unless you're in a really bad situation with work availability.
If they can throw you under a bus because you raise a valid issue, what are the chances they'll protect you when some fraud paperwork gets signed by the IT dept (so you).
I'm just saying the problem is basically systemic. Powerful people in charge are going to do what they are going to do. Very few will voluntarily place restriction upon themselves even for their own good. The person that sent the money probably did it because the CFO had a history of acting like a child/irrationally/short fuse.
Very few CEOs are going to make people feel comfortable telling them no.
My anecdotes were to illustrate that it's widespread if I've personally encountered it multiple times. Also, just to entertain.
Yes, that’s why I described it as a management responsibility. That kind of dominance culture is very common and it basically ensures this kind of stuff will keep happening, similar to how all of the phishing training in the world is largely cancelled out by not requiring partners and vendors to have better email practices. It might take that CFO featuring in a crime like this one to get their attitude to change.
Just as every major company now sends out fake phishing emails, we'll need to normalise sending out fake emergency emails from your boss saying that you need to transfer money somewhere.
It might not matter in the extreme case as there could always be a sufficiently serious emergency that will force their hand to bypass every policy. e.g. if they get a National Security Letter.
That’s not Joe CPA’s problem, though, beyond verifying that the men in black have valid government ID. If the FBI raids your office, you’re not the one in trouble for it.
Let’s not ascribe too much power to those, either: NSLs can compel release of certain types of information but they can’t force you to do things like transfer money or even disclose the contents of private messages.
I would assume the matter of time for it occurring elapsed a while ago, and now we are at the point where it's not only being detected but actually revealed, regardless of how embarrassing that is.
> How long before scammers do bulk video calls to parents/grandparent pretending to be the kids saying they are in trouble and need $$$ ASAP.
Unfortunately, this is why we need open access to some deepfake tech. The only way to convince people who are not immersed in tech how convincing deepfakes can be is to sit with them, and create their own deepfakes.
Then memorize and practice security protocols like verbal passwords.
The issue with people disregarding security protocols goes much deeper than them being unaware of what's possible. People just hope nothing will happen and avoid thinking about it. You're facing "Who's got time for that stuff? We have actual work to do!" and "What's so important about our data/access privileges/whathaveyou anyway? Nobody will bother stealing it."
There was an old theory that you needed to be holding today's newspaper, or mention current events, to at least show that a recording was not prepared earlier, but this advice is out the window given enough dedication from the adversary.
That's already happening successfully without deepfakes. Scammer calls and says "grandma I'm in trouble, they are holding me in jail unless you buy gift cards"
Recently a group targeted expat/temp students and their families. They somehow coerced the kid to go camping and not pick up the phone for anyone, and then they told the family the kid was with them. The family paid.
Seems like it can be stopped dead with standard crypto, smart cards and multifactor tokens, multiparty authorization etc. Ideally, issued by public authorities together with any other official ID, leveraging the strong security governments have already built around that process.
The generic type of vulnerability referenced in the latter part of the article has sprung up after fintech tried to emulate traditional offline auth and KYC with things like scanned images of ID documents, face recognition and liveness detection. Anyone in the know could see these attacks coming miles away.
It's easy. We just generate our own key pairs, establish a web-of-trust by signing each others public keys at in-person meetups, and then use those signed keys to authenticate all the digital communication we do with each other.
You know, like we've been doing with our emails since PGP was developed in 1991. You can tell how simple the process is, by how ubiquitous it has become in a mere 30 years!
I think the poster meant the prior meaning of the word 'crypto' -- cryptography, in which the CFO could sign and encrypt some message and then the message's authenticity could be verified.
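To make the cryptography reading concrete, here is a minimal sketch of what "the CFO signs a message and staff verify its authenticity" could look like, assuming the third-party `cryptography` package is available. The key names and the order text are illustrative, not from any real deployment; in practice the private key would live on a hardware token rather than in memory:

```python
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# Illustrative only: generate a key pair in memory to show the flow.
cfo_private_key = ed25519.Ed25519PrivateKey.generate()
cfo_public_key = cfo_private_key.public_key()  # distributed to staff in advance

order = b"Transfer HK$200M to account 12345"
signature = cfo_private_key.sign(order)

# A staff member verifies before acting; verify() raises on a bad signature.
try:
    cfo_public_key.verify(signature, order)
    verified = True
except InvalidSignature:
    verified = False
assert verified

# A tampered order (or one signed with an impostor's key) fails verification.
try:
    cfo_public_key.verify(signature, b"Transfer HK$200M to account 99999")
    tampered_ok = True
except InvalidSignature:
    tampered_ok = False
assert not tampered_ok
```

The hard part, as the thread keeps pointing out, isn't the math; it's getting people to refuse an order when the signature is missing.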
"Can you buy $1000 worth of egift cards and text me back with the redemption codes? Our jobs depend on this. I'm in a very important meeting, otherwise I'd do it myself; I left my private key at the office and can't sign this message right now."
I think many people would expand the word crypto to cryptocurrency and not cryptography. We can argue on and on about which is the "correct" expansion but in my opinion a word's current meaning should be the most popular association people have of it.
Only on HN do I find people who actually know what cryptography is. Almost all the people I know have never heard of it, but all of them have heard of bitcoin, and most have heard the word crypto being used with reference to cryptocurrencies.
That's not to say that my experience somehow means more than yours or is more valid. But I personally think my experience is more representative of the average layperson. You're welcome to disagree.
Sure the non technical world is (sadly) more familiar with Bitcoin.
I specifically said the technical world. Most people I know are technical to some degree and almost all of them would assume cryptography when they hear the word "crypto".
How does crypto add anything that just verifying email ID/phone number doesn't provide? If your solution is to whitelist some certificates or keys, you can just as easily (or even more easily) whitelist email IDs/phone numbers.
Cryptography can and should be done on hardware tokens, which can be reported the moment they're stolen. A video call with email/phone is easy to fake.
I work with people who all have hardware crypto, you are right that we do not have the organizational knowledge to verify everything with crypto. Even if the tech is 60% there.
Email means I got access to your device, or to something you've configured to be able to send email, which is probably a lot of servers, unless you have an entire domain dedicated to financial messages and everyone knows not to trust any other domains.
A message signature means I got you to do something like tap a Yubikey and enter a PIN, touch a fingerprint sensor, etc. That can still be socially engineered, of course, but it can’t happen by accident and you could add some safeguards against routine by having a dedicated “major transactions” key used only for that purpose to add a physical speed bump.
The problem is that “ignore my gmail, I lost my phone” will defeat that training more often than we’d like, so you really need process safeguards which make it a requirement, and management backing to say even the CEO will follow the lost-device process rather than asking someone to bypass it, and that has to be so carefully enshrined that nobody questions whether their job is on the line if they tell the real CFO that they can’t bypass the process.
> Yubikey and enter a PIN, touch a fingerprint sensor, etc
Laptops and mobiles have all the same sensors. Most big companies have organization-wide password, fingerprint, and auto screen-off requirements. Obviously not all companies follow good security practices or issue secure devices (with sensors and encryption), but if that is the case, a Yubikey isn't going to save them.
From what I gather it depends on the carrier. T-Mobile is supposedly the easiest and Verizon the most difficult. The Darknet Diaries (link below) recently did an episode on how the sim swapping thing works and how expensive it is to get it done.
Care to explain how I can spoof another's phone number? Also, a phone is as hard to steal as any other device where a key is stored. In fact, people will notice their phone is stolen much sooner than a USB key, laptop, or anything else.
There is authentication between your phone and your telco, but there is no authentication between your telco and others. Any telco in the world (and there are many), or someone who has bribed (or hacked) someone who works at one, can say "this phone is now roaming on our network" and traffic gets routed there.
These things are usually discovered but not before a call or sms goes through. There are also other possibilities such as diverting calls available to someone with the right access to the signalling network. Anything that's unauthenticated and unencrypted should be regarded as insecure, really.
> someone who has bribed (or hacked) someone who works there can
There is literally no encryption scheme that could handle this. By that logic, if I bribe or hack the Yubikey company, they could ship a malicious batch of Yubikeys. Or I could bribe or hack Microsoft/Apple to get root access to someone's machine.
There is (or was) no authentication within the core of the public switched telephone network, since it was designed at a time when that was impractical and physical infrastructure was assumed to be reasonably secure. So you don’t need to fake signing, you just say “Hey, +1-555-555-5555 roamed onto my network and is making a phone call” and the recipient takes this at face value. (“Blue boxing” to fake the phone system into giving you free long distance phone calls worked for similar reasons.) STIR/SHAKEN is supposed to fix this, though I don’t know how far along implementation has actually gotten.
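For reference, the STIR/SHAKEN fix mentioned above works by having the originating carrier attach a signed "PASSporT" token (a JWT, per RFC 8224/8225/8588) to the SIP call setup. A minimal sketch of the unsigned header and claims, with a hypothetical certificate URL, showing what the carrier actually attests:

```python
import base64
import json
import time

def b64url(obj: dict) -> str:
    """Base64url-encode a JSON object without padding, JWT-style."""
    raw = json.dumps(obj, separators=(",", ":")).encode()
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

# JWT header: real tokens are ES256-signed; "x5u" points at the carrier's cert.
header = {
    "alg": "ES256",
    "typ": "passport",
    "ppt": "shaken",
    "x5u": "https://cert.example-carrier.invalid/shaken.pem",  # hypothetical URL
}

# SHAKEN claims: "attest" level A means the carrier vouches that the caller
# is entitled to use the originating number; B and C are weaker.
claims = {
    "attest": "A",
    "orig": {"tn": "15555555555"},
    "dest": {"tn": ["15551230000"]},
    "iat": int(time.time()),
    "origid": "de305d54-75b4-431b-adb2-eb6b9e546014",
}

# The signature over this would normally follow after a second dot.
token_without_sig = f"{b64url(header)}.{b64url(claims)}"
```

The terminating carrier validates the signature against the originating carrier's certificate, which is what finally adds authentication between telcos rather than within one.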
They let untrusted people in on the trusted network, basically. Telcos are no longer considered national security. Privatization has given us a cheaper and better communication, but security hasn't always kept up.
The authentication above is between the terminal and the location registry (simplified; there are several other components involved), not internationally between the telcos.
If you can get an SS7 link with a telco, in most cases it's trivial to spoof Caller ID signals, as those are essentially forwarded from the originating network. Getting a direct SS7 link isn't as hard as it sounds; IIRC it's a common thing if you want to run a VoIP provider.
Your telco's NOC can at best track which "port of entry" the call came from, but can't force the Caller ID to be truthful.
I imagine it has changed, but 10-ish years ago I recall having a cheap VoIP account that just let me enter whatever phone number I wanted as the caller ID.
It's very much an "honor system". If the VoIP provider doesn't do due diligence, the other networks can't really check the value, especially since number porting became the norm.
For the first few dozen times, sure, but after the hundredth or so report of a scam call associated with a spoofed number, the VoIP provider should be blocked by the telco. That is, if telcos were allowed to do so.
Once you have lived through reporting fraud to a telco, you learn the procedures are a scripted loop, everywhere. The answer will probably be "go to the store and get a new SIM", and people who don't investigate their own situation will never reach the conclusion it was swapped. I haven't dealt with SIM swapping, but I have dealt with some pretty heinous organized crime, and the folks are nice, yet you will never walk away knowing the cause or source of an incident.
Everyone in the call has a cryptographic ID that can be authenticated with a trusted authority. Your device would just ask all the others for a one-time token that it then submits to the ID server. The server tells you the public identifier of the person associated with that token.
We already have infrastructure for bus and rail tickets, for logging in to banks, tax authorities, health services, etc. in Norway and other countries that could easily be extended to cover this use case.
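The token flow described above can be sketched in a toy form. This is purely illustrative and not modeled on any real national eID API; class and method names are made up:

```python
import secrets

class IdService:
    """Toy trusted-authority stub: issues and redeems one-time identity tokens."""

    def __init__(self):
        self._tokens = {}  # one-time token -> public identifier of its owner

    def issue_token(self, authenticated_user: str) -> str:
        """Called by a participant's own (already logged-in) device."""
        token = secrets.token_urlsafe(16)
        self._tokens[token] = authenticated_user
        return token

    def redeem(self, token: str):
        """Called by a peer's device. Single use: consumed on lookup, so it
        cannot be replayed in a later call."""
        return self._tokens.pop(token, None)

ids = IdService()
token = ids.issue_token("CFO, Example Corp")      # the CFO's device fetches a token
assert ids.redeem(token) == "CFO, Example Corp"   # your device verifies the peer
assert ids.redeem(token) is None                  # replaying the token fails
```

A real scheme would bind the token to the call session and sign the server's response, but even this toy version shows the key property: the deepfaked face contributes nothing, because identity comes from a device the impostor doesn't control.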
I don't know. Based on how it is described in the article, you could detect it via the means you mentioned and raise them as warning flags to the user, but as a last instance there will still be users that ignore all the warning signs and be convinced by a good scam story.
...such as a person much higher up in the organization giving you a direct "urgent" order. It shouldn't be hard to find corporate employees who really fear their superiors.
Then it's the fault of those superiors for setting up a culture of fear and mindless subservience, instead of one of strong rules even they themselves are expected to follow.
Cryptography without strong social rules is just cargo-cult religion.
>I guess it was a matter of time before this occurred. How long before scammers do bulk video calls to parents/grandparent pretending to be the kids saying they are in trouble and need $$$ ASAP.
Umm where have you been the last decade? The "Grandma help me I'm in a foreign prison and need you to buy iTunes gift cards" scam is extremely lucrative.
Opening with the line "Umm where have you been the last decade?" feels like throwing insults, and is not conducive to a positive environment for learning from one another. You probably didn't mean it that way, but I thought I'd point out this style.
Regarding the last decade, the pertinent part of the comment you responded to is "do bulk video calls to parents/grandparents pretending to be the kids", i.e. referring to when these existing scams hit a higher level.
> How long before scammers do bulk video calls to parents/grandparent pretending to be the kids saying they are in trouble and need $$$ ASAP.
A friend of mine in the US actually personally knows two people that this has already happened to, albeit with audio only. With video it's going to be nuts.
In France there have been cases of employees wiring money, convinced that they were talking to their CEO/CFO/lawyers over the phone.
Many cases were due to a Franco-Israeli gang arrested in 2022/2023 that managed to make at least 38M Euros out of it. They impersonated CEOs without the help of deepfake AI.
See https://www.europol.europa.eu/media-press/newsroom/news/fran...
the scary part is how easy this would be to do right now, especially for a larger, higher-profile company. leadership is almost synonymous with an online presence in the form of podcasts, interviews, youtube videos, conference talks. combine that with public photo-sharing app profiles, and you're in business.
It's a sophisticated attack for sure, but the data collection really isn't too difficult now. A minute or two of audio is sufficient for voice, and a single good image.
Only if you intend to run the scam only once, or if all of the work is completely bespoke and not reusable for future attacks.
That seems unlikely. I'm pretty sure there's actually a lot of economies of scale here, where the attackers' pipelines will become vastly more efficient and higher quality over time, with each attack requiring less manual work.
I have no idea how something like this can even happen. In a company of that size it should be actually impossible for a transaction like this to occur without clearly documented processes to ingest, review, authorise and pay transactions.
I have clients where anything over even quite a low set limit (say €10k) requires multi-party authorisation - and it's very common for the person entering payments to be unable to authorise payments. That's just good practice.
A payment should not be able to be queued without a PO number. If the payee is new, the bank details need to be verified by phone. Once approved as a destination account, that payee is set up in banking, and authorised by a finance clerk and someone more senior. At the point a payment is requested the PO and other details should be double checked against what is in the system. If there's a match, then the payment can be queued for authorisation. The person entering payments and the people approving payments should be entirely different - and it should be people, not a single person. When payments are entered, the payments should be reviewed by first authorisation - a finance manager, for example - and once that authorisation is conducted, depending on payment limits, another authorisation or authorisations will be carried out.
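The segregation-of-duties rules described above can be sketched as a small state machine. This is a simplified illustration with made-up names, not any real treasury system: a payment can only be entered against a payee verified by callback, and must be released by approvers distinct from the person who entered it.

```python
class PaymentQueue:
    """Toy dual-authorisation payment queue with segregation of duties."""

    def __init__(self, verified_payees: set, approvals_required: int = 2):
        self.verified_payees = verified_payees        # payees verified by phone callback
        self.approvals_required = approvals_required
        self.pending = {}  # PO number -> [payee, amount, entered_by, approver set]

    def enter(self, po_number: str, payee: str, amount: int, entered_by: str):
        """Queue a payment; refuses payees that never went through verification."""
        if payee not in self.verified_payees:
            raise ValueError("payee bank details not verified by callback")
        self.pending[po_number] = [payee, amount, entered_by, set()]

    def approve(self, po_number: str, approver: str) -> bool:
        """Record one authorisation; returns True once enough distinct
        approvers (none of them the entry clerk) have signed off."""
        payee, amount, entered_by, approvers = self.pending[po_number]
        if approver == entered_by:
            raise PermissionError("entry clerk cannot approve their own payment")
        approvers.add(approver)
        return len(approvers) >= self.approvals_required

q = PaymentQueue(verified_payees={"Acme Ltd"})
q.enter("PO-1001", "Acme Ltd", 9_500, entered_by="clerk")
assert q.approve("PO-1001", "finance_manager") is False  # first sign-off, still held
assert q.approve("PO-1001", "controller") is True        # second sign-off releases it
```

The point of encoding this in software rather than policy is exactly the scam scenario: a convincing video call can pressure one person, but it cannot make the system release a payment to an unverified account with a single pair of eyes.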
I worked at an investment bank which made daily FX transactions to cover trading in world markets to their nostro accounts, and these could easily be in the 10-100 million range on a given day. Transactions like these are not particularly surprising in that context, so processes will be in place to reduce the workload on operations staff, so that they only need to validate exceptional transactions.
If you have 10 business units trading 50 world currencies, checking 500 transactions for FX every day is a total chore hence it would get automated, and only unusually large transactions would be flagged. Rules like <10m goes through automatically would be tuned over time so that the workload on operations team members would add actual value without being onerous on their time.
So, depending on the business we are talking about, a 25m transaction could basically be lost in the noise. Given the mention of the CFO being london based and the operations team being in HK, it sounds like a typical investment bank setup to me.
But I assume these daily transactions are going to the same validated target accounts, which are not changing daily. In this case I assume it was a transaction to a random account.
Yes, that's a good point. The nostro accounts don't change often, but they do change as new business lines come and go, but I don't remember the validator having any rules based on the target accounts in the system I was involved with. However I may be wrong, and that was 10 years ago, so things have probably moved on.
> I have no idea how something like this can even happen. In a company of that size it should be actually impossible for a transaction like this to occur without clearly documented processes to ingest, review, authorise and pay transactions.
After having worked IT for various startups, I cannot overstate just how much executives and other higher-ups detest policies that make them verify who they are. It short-circuits something in their ego.
True. I was closing a real estate deal once with a rich guy and he called his private banker for something. He had a near-meltdown when they asked him some kind of verification question.
At the end of the day, even in large firms you only need to fool three or four eyes. Those eyes might get a lot of transactions to process, and a certain complacency can set in. The hope is that automatic controls will aid those humans with all kinds of checks, but even billion-dollar transactions are, at the end of the day, human transactions.
I have been witness to spreadsheets passed through email, WhatsApp, etc. from one department to another to initiate payments. It's all about trust perception. That is one of the weak links.
I don't get it. I work for a biggish company. Every time a user wants to join my Miro team I have to use a maze of ancient purchase order systems like Sage with multiple levels of approvals from our finance team. It's almost outrageously draconian but... not a penny goes by unpinched.
I'll give you good odds that if you ever talk to the CFO about the transactions they personally sign off on, it's a lot of emails and spreadsheets passed around. Processes are there for the little people; the big ones are the boss's own business ("Chefsache"). I also know what the biggest risks are. Not the automated stuff, not the very big M&A stuff; it's the not-yet-automated routine combined payment order that is boring but rests on a few insiders to keep working. Insiders are very much in demand for these cons. The voice of the CEO is nothing; you need the proper tone, the proper pomp and circumstance.
You don't want them anywhere. The fact that people exist who can bypass detrimental processes that apply to low-level workers is what nvr219 was complaining about.
It depends what the multi-party authorisation is trying to protect against. Normally you're trying to protect against insiders stealing money by authorizing a payment on their own. In this case it's quite possible that multiple people inside the company signed off on the transfer and it all happened "by the book".
Social engineering is substantially about appealing to someone who can do all the steps you cannot perform from outside.
From what I understand of the literature, it’s often several interactions to gather enough information from several employees to learn to sound like you belong there, then using it all against someone with “keys” who escorts you the rest of the way.
Exactly. There are programatical barriers you cannot bypass alone.
I can imagine a scam where the fake CEO gets a phone or laptop outside of the process "because CEO". This, however, will still be limited to generic, low-value stuff handled by single people in a company.
There is no way that a reasonably organized company can leak 40 MM USD.
Yes, but this is due to three people trying willingly to bypass the system, and probably a shitty UI. They knew what they were doing, they just did it badly.
Citi is one of the major foundations of the entire industry; how much of a "yes but" is involved before it's standard practice? Shitty UI is extremely common inside financial companies because fixing it would cost money.
> And Citibank software is really jenky (ph), so basically the only way to
> complete the wonky transaction is to sort of momentarily trick the software into
> thinking that Revlon has repaid the entire loan.
"There is no way that a reasonably organized company can leak 40 MM USD. "
Oh, please, HP lost some 40 million in inventory while contracted to Solectron Global for repairs, because their inventory systems are utter garbage compared to Dell or Toshiba.
Except these sorts of transfers almost always happen with, at a minimum, dual approval, where exceptions cannot be made because software defines the rule.
One employee submits the transaction for review, and a 2nd (and sometimes a 3rd or 4th) person must approve it before the payment initiates. There typically isn't a bypass function.
Also, CFOs are typically responsible for setting up and enforcing these controls. A big part of a CFO's job is to manage risk. If you work under a CFO, you would be more likely to be rewarded for following the process than be punished.
Obviously there are exceptions to this, but by and large no CFO would punish a finance person for disobeying an order to bypass a process intended to prevent financial fraud.
When I say “no CFO” would punish someone doing things that mitigate fraud… it’s the same as saying “no software engineer intentionally introduces bugs on purpose”.
Obviously the statement isn’t literally accurate. Hopefully it’s 99% accurate (otherwise none of us would have jobs if all we did all day was sabotage our employers). Likewise, not every CFO is to be trusted, nor are all software engineers… but most can be.
I think there's a lot of people here making the wrong assumption about how their CFO would react to being told no.
These people aren't stupid. I'd expect them to understand risk better than your average senior software engineer and if you tell them "Sorry boss, too risky to do that right now. I can't be 100% sure this message is genuine. Let's sync on this after your meeting", your chances of promotion at this company would likely rise, not fall.
a common thread in these modern rogue-trader scandals is that the perp worked on the controls or monitoring system in a role prior to becoming a trader
so they knew how to structure their trades in such a way as to evade detection
I feel like I’m missing something from your post. Are you being asked to approve several large financial transactions per hour in your job as a software engineer?
Regardless, approvals for multi-million transfers require a higher level of process and approval.
Maybe, but those multi-million transfers are usually going to known trusted counterparties (other brokers, bank treasuries), not random vendor accounts.
> In a company of that size it should be actually impossible for a transaction like this to occur without clearly documented processes to ingest, review, authorise and pay transactions.
Oh, my sweet summer child. The larger the organization, the more dysfunctional it becomes.
See How this scammer used phishing emails to steal over $100 million from Google and Facebook
This was a case where someone pretended to be other people at the company over video calls. It's not a huge leap for that to happen to multiple employees; even if it didn't happen here, having multiple people involved doesn't eliminate this attack.
All the checks you describe - multiple approvers, standing data, callbacks etc - the guys going after big payments like this know these checks are in place, how they work and have a game plan for it.
But does that fix the problem or just slow it down a bit?
If you can deepfake one guy with the checkbook, can’t you deepfake the guy with the checkbook and the guy who enters the POs into the system? Lower odds, but far from zero.
I like the juxtaposition of this comment and the one before it, saying "if we had to authorize every transaction of a few million, we wouldn't get any other work done".
You don't understand current banking as it's happening right now, simple as that. Also, you probably didn't read the article, since it clearly states it was a 'secret' transaction, most probably meaning it bypassed all the controls put in place.
I mean, we still live in a world where a very rough match of a signature on a piece of crappy paper is enough to move millions if needed.
Corporate processes are not laws, and corporations are not states; they are fiefdoms, and of course the baron gets to do as he wishes. Why, whenever that illusion of order crumbles away, have this sort of public meltdown just because one is powerless and exposed to be trampled at any moment by random forces? This is just life, and this is just part of a medieval peasant's existence, toward which all of HN helped culturally steer this ship. Get over it, get on with it.
> Initially, the worker suspected it was a phishing email, as it talked of the need for a secret transaction to be carried out. However, the worker put aside his early doubts after the video call because other people in attendance had looked and sounded just like colleagues he recognized.
this is the real problem. why oh why, after suspecting an email as phishing, would you then go on to even click ANYTHING, let alone join a video call?
insanity. either stupidity, or he's lying about suspecting the email. how many corporate security trainings does it take? this is just about 101. "if asked to do a secret task by a suspicious email, DON'T do it"
> how many corporate security trainings does it take? this is just about 101. "if asked to do a secret task by a suspicious email, DONT do it"
It takes $CURRENT_NUMBER + 1.
People are still, to this day, racking up thousands of dollars in iTunes gift cards on corporate cards and mailing them out, because they got a text from "the CEO". It happened at my spouse's work just last year. It'll continue happening again, forever, because to paraphrase P.T. Barnum, a sucker is hired every minute - in the probability distribution of humanity along that particular axis, there's always going to be some percentage at the bottom who'll fall for the most obvious scams. Sometimes repeatedly.
> "if asked to do a secret task by a suspicious email, DONT do it"
This is not what they teach you in trainings, though. They teach you to get the requestor (or your boss or whoever might be authoritative) on the line and confirm that the email is authentic. I believe a video call qualifies as well.
Depends on who initiates that video call. If the "boss" calls you and says "did you get the email" that is suspect at least if you don't carefully verify the origin of that call. Basically any communication that you don't initiate might be fake.
Have you ever actually done corporate security training? It's very obviously 100% useless and not going to teach anyone anything.
A company I worked for actually started sending test phishing campaigns which is a lot more effective, but I thought they were still pretty obvious and also it led to stupid people reporting them on Slack endlessly.
>It's very obviously 100% useless and not going to teach anyone anything.
I've seen some decent ones. e.g. One that was presented from adversaries PoV which I thought was innovative & got people thinking about it in novel ways (at least did for me).
This sounds like bad company processes all around. For a sum this high, you need more than just a video call. Get an email (if the tech team set up DMARC correctly, sending phishing from the company domain is near impossible). Talk through company chat (Slack, Teams, etc.). Call a couple of high-ranking people on their cells.
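For what it's worth, the DMARC part is a one-line DNS TXT record published at `_dmarc.<domain>`. A strict policy (domain here is hypothetical) tells receiving servers to reject mail that fails SPF/DKIM alignment and to send aggregate reports:

```
_dmarc.example-corp.com.  IN  TXT  "v=DMARC1; p=reject; adkim=s; aspf=s; rua=mailto:dmarc-reports@example-corp.com"
```

With `p=reject` in place, a phisher spoofing the exact company domain gets bounced; they'd have to fall back on lookalike domains, which trained staff and inbound filters have a better chance of catching.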
It's one of the better ways to avoid getting scammed: Try to validate the communication in ways without relying on any information they gave you.
If someone claims to be a police officer and hands you a number to call to see if they are real... don't use that number. Figure out the non-emergency number of the station they claim to be coming from independently and ask them. If a "new agent" from your bank calls you and gives you a "new number" to call them, figure out an official number of your bank and call that.
Yep. Previous scam victim here. This step would have halted the scam I fell for.
Also, whenever paying new accounts, once you've independently reached the person you think you're talking to, always do a test transaction and make sure they get it before sending the rest.
If they want to do business like it's the 21st century, they can invest in 21st-century security and policies. Otherwise, get on the darn plane and do it 1970s style.
What is the chance that a CFO gets in touch with a deepfake specialist and they split the profits? I’m not saying that this is what happened, I’m more focused on future scenarios.
Or even better: have CEO & COO hold a secret emergency meeting with the CFO; after the money transfer the CEO & COO deny everything, claim they weren't there and were deepfaked.
I thought the value of these types of people is their people skills.
Nobody would accuse me of great people skills and while I'd like to point to my technical acumen as the reason I can spot fakes like this easily, it's my primate brain that knows something is wrong.
Just like security specialists can follow a phishing link, I'm sure employees with good people skills can be conned into sharing details with a deepfake colleague.
Social engineering works because people think they could spot it.
Just because I am curious, and have not yet seen any software capable of fooling me in this regard: what would somebody use to do this? Is this an already existing product that can create video representations of people I know so well that it would fool me?
>I have not seen any software capable of fooling me
That belief is a catch-22, though. By definition, each time one fooled you, you didn't note anything other than a run-of-the-mill normal video. A lot of tiktok accounts lately are dedicated to deepfaking celebrities. For example, if I hadn't already told you and you just casually scrolled by it, would you immediately suspect this isn't Jenna Ortega https://www.tiktok.com/@fake_ortegafan/video/732425793067973... ? I didn't look for the best example, that was just the very first that came up.
>Is this an already existing product
Usually cutting-edge ML has to be done with a GitHub repo last updated a few days ago, using TensorFlow/PyTorch and installing a bazillion dependencies. And then months later you might see it packaged up as a polished product startup website. I've seen this repo a lot https://github.com/chervonij/DFL-Colab
There was a paper linked on here recently (last few months?) that showed off video call deepfaking using gaussian splatting, essentially using a webcam to "puppet" a very convincing 3D recreation of another person's head & shoulders in real-time..
I tried to find the link but my search-fu is not good today it seems..
There's also the fact from the article that this was an employee in Hong Kong on a video call with people supposedly in the UK, so it's also possible they took advantage of bad video quality to do this..
Get on video for the first minute or so, then, as we've all done, say "I'm going to turn off my video so my connection sounds better" etc...
This is where those 'security researchers' are helping to make such fraud easier. If you release these tools into the wild you are enabling criminals who by themselves would have no way to create these tools.
Security through obscurity does not work. As soon as deepfakes have proliferated on TikTok for stupid stuff, they'd inevitably be used for this kind of exploits by any adversary that is motivated enough to do a directed operation on a high value target.
The researchers really just raise awareness on where things are going, but ultimately the solution will be to improve process and verify anything that has to do with money through specific internal company channels that are hard to forge - and anybody in a call like this that would not use them needs to automatically raise an alarm by procedure.
Inventing new tech that has very obvious negative uses and zero positive ones isn't 'security through obscurity', it is security through responsible behavior to say 'maybe I shouldn't'. Just because you can doesn't mean you should.
Just the idea that the perps in this case had the ability to code this all up by themselves is ridiculous, 99.99% of the cyber crime out there is point-and-click from some downloaded tool and maybe 0.01% 'hackers' that use their own tools. Releasing all this junk in easy to use form is a very large factor in the rise of cybercrime. Imagine an outlet on every streetcorner where advanced weapons were given away freely and then to make the claim that since someone could theoretically come up with any of these there is no reason why we shouldn't be giving them out for free. That's roughly the level where we are at.
There is some middle ground between researching how things could be done and releasing those tools to every wannabe criminal on the planet, many of whom are in places that you'll never be able to reach from a legal point of view. Thousands of businesses are hacked every day by tools released by 'researchers' to prove that they are oh-so-smart, without a shred of consideration for the consequences.
I'm still not sure what you suggest. Do you want to police the world of software, only allowing stuff to be released that has obvious use and limited negative effects? That won't really fly in a liberal society.. people will tinker unless you want to go the dystopian path.
I mean sure, you can nicely ask or try to shame people, but when did that ever do anything of note?
I'm at the point where I see the whole security industry as parasitic. It's an industry that only exists to keep itself and the people active in it employed to the detriment of the rest of society. You want to research security stuff? Cool: keep it to yourself, don't release it. Because if you do release it then the only people that will really benefit from it are the bad guys and no amount of handwaving about how blessed we all are that you're releasing these exploits into the wild (and they are exploits, even if 'deep fakes' look superficially like they are not) for free and bragging rights is beneficial to society. It isn't. Having these skills should come with enough of a sense of responsibility to know how to use them without causing a lot of damage.
All we're doing is enabling a whole new class of criminal that is extra-judicial and able to extort and rob remotely whilst sitting safely on the other side of a legally impenetrable border. As long as that problem isn't solved there is a substantial price tag affixed to giving them further arms for their arsenal. The bulk of them are no better than glorified script kiddies who couldn't create even a percent of the tools that the security researchers give them to go play with.
There are strong parallels between arms manufacturing and the creation of these tools and the release of these tools into the wild. Without that step there would be far less funding for the security industry as a whole and I don't think that's an accident: by enabling the criminal side the so-called 'white' side increases its own market value, they need the blackhats because otherwise they too would be out of a job. Meanwhile the rest of the world is collateral damage, either they see their money stolen (check TFA), they pay through the nose to the 'white hats' to keep their stuff secure (hopefully) or they pay through the nose the black hats due to extortion and theft.
I wish both parties would just fuck off, but only one of these is hopefully amenable to reason.
Being part of the security industry, I'm certainly not impartial, but your view seems to be a bit naive and you seem to be generally angry at the world.
Thing is, when computers permeated society in the 90's, everything looked so simple and wondrous, few people did nefarious stuff and if so mostly for fun. Now during the 2000s computers matured in companies to a degree that they became fundamental infrastructure, and that's where complications start, as someone eventually wants to take advantage of that to make a profit without regards to the means. The Internet bringing the world closer together of course changed the playing field.
Now trust me, many companies would love to sing kumbaya and ignore the topic all together, but that's just a way of presenting oneself as a low hanging target, as many have painfully recognized. And that includes low skill and targeted attacks on all levels. That's why there is a security industry, because IT infrastructure became so fundamental to how we do business.
Now it's a part of everyday life, a risk like other externalities: market cycles, supply chains, and a million other things. The main issue really is that back in the day nobody cared all that much, so there are few people who got into this field, and thus there is a constant shortage.
But generally, the kind of stuff like in the article is just one of many security threats both low and high skill that companies are facing and they need a sophisticated system/process to categorize and counteract them (both in terms of prevention and damage mitigation). Unless you manage to remove global inequality and the incentives to exploit affluent entities, this reality just is.
Now I know this sounds grim, but statistically we are currently way better off than just a few decades ago, much less centuries. Things get better. It's just in our human nature to bitch about it anyway. Just take a deep breath and enjoy your shipping-free delivery of basically anything you could want at reasonable rates, straight from the other side of the world, while looking at bleak news that in no way reflects statistical reality (nobody wants to hear how well things work compared to 20-50 years ago; that's boring).
The idea that the criminals are broke, talentless hacks is so wrong. They're the ones with the time and money, often more than industry researchers have. If some researcher finds a vulnerability in some widely-used software or device, there's a high chance a malicious actor has already found it or will soon. Not sharing research is how you allow them to operate in the dark.
I have known two publicly traded companies that fell victim to similar sorts of scams (someone impersonating the cfo or ceo over the phone). One was defrauded out of a seven figure sum, the other got lucky and a bank involved stopped the transaction to verify again. I don't know how the first was able to keep it quiet, I only knew because I chatted with the people in question. I suspect that the deepfake angle makes it easier to admit that they were defrauded in this way.
Talking about how something like this can happen in a big company is fun and all, but the scary thing is that it is _so much easier_ to do these sorts of scams with deepfakes. Which means they will be deployed against "softer" targets, like you and me, and your parents and grandparents.
There really ought to be a stronger signal of confirmed identity in business calls. Something cryptographic. Every single day I end up in business calls randomly scattered across Teams, WhatsApp, FaceTime, Zoom, and a half dozen other systems. Instead we get stupid cartoon avatars and the ability to put a funny backdrop behind us.
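A cryptographic check here doesn't have to be exotic. As a minimal sketch (assuming a pre-shared key per executive, enrolled out of band; the function names and the symmetric-key design are illustrative, not any real product's API), a fresh challenge-response defeats a deepfake that can only mimic a face and a voice:

```python
import hashlib
import hmac
import secrets

# Hypothetical sketch: HMAC-based challenge-response over a pre-shared
# key. A real deployment would likely use asymmetric keys plus a
# directory of public keys, but the flow is the same.

def make_challenge() -> bytes:
    # The verifier generates a fresh random nonce for each call,
    # so a recorded response can't be replayed later.
    return secrets.token_bytes(32)

def sign_challenge(shared_key: bytes, challenge: bytes) -> bytes:
    # The person being verified signs the nonce with the shared key
    # (e.g. from a hardware token or phone app on their side).
    return hmac.new(shared_key, challenge, hashlib.sha256).digest()

def verify(shared_key: bytes, challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(shared_key, challenge, hashlib.sha256).digest()
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(expected, response)
```

The hard part, as replies below note, is not the math but the key management and the UX around it.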
I'm not sure if we have good enough systems in place, in terms of UX, for that to work.
Imagine every C-level exec who's opened a top-urgent ticket with IT because their printer doesn't work (they forgot to plug it in/forgot it needs paper/it's not a printer, it's a paper shredder) trying to operate some form of key exchange software securely, while people capable of pulling off this sort of scam are targeting them.
I don't think this is a problem that can be solved with technology.
Many comments say finance companies should require more authentication for large transfers, and the common response is that transferring a few million is routine in that field, so authenticating transactions the size of the one quoted in the article would be impractical. The response to that, then, is that if $25M doesn't matter enough to verify in some way, companies shouldn't cry when it's stolen. Losing a few million here and there, which apparently isn't worth authenticating, is just a cost of doing business. Either it's worth the extra controls, or it's like snacks in the rec room: not worth worrying about.
This also opens up the possibility of stealing the money yourself and then pretending you were fooled by an AI impersonating your boss, giving you orders you couldn't refuse. Send fake videos of your boss to yourself, create a smoke screen, and give the police a digital hare to chase that doesn't exist. Wow. Sounds like the plot of a new Netflix series.
The old adage "follow the money" still holds, you'll be forever monitored until it shows up. It's even easier in the digital world where they don't need to park a pizza van outside your house.
I think there would be a good chance of getting caught if you did that. The police would be involved at this point and would definitely consider that as something to investigate before concluding the case.
It's all about knowing the limits: most theft like this isn't really reported to or investigated by the police. Companies lose money to stupid scams and thefts all the time; it's usually just written off. $25M would obviously be investigated, but $25k?
The secret challenge already exists: it's the phone number / email address / video-call account of the CFO. If the CFO wants to order an employee to send money, the employee should only act after making an outgoing call to the CFO.
100% agree. "Hang Up, Look Up, Call Back" should be made into a jingle and absolutely hammered into the culture of, at this point, literally everyone (given all the scams that occur targeted both toward consumers and employees): https://krebsonsecurity.com/2020/04/when-in-doubt-hang-up-lo...
The CFO already separately sent him a message before the call, and I wonder if they'd get access to the CFO's number in a central directory (leaving aside the fact that you're asking to message them while they're live "in front" of you).
If the CFO gave a number on the call, it wouldn't be much of a check either.
I think the real improvement would be to have the CFO file a ticket, but obviously that company was used to playing it fast and loose.
For a finance worker I actually wonder how much it means to transfer $25M.
I have no idea, but I suppose moving funds from one subsidiary to another, for instance, wouldn't be for just a few thousand, and he sees money fly around day in, day out. Would it feel the same as an infra engineer rebalancing a few million accesses from one cluster to another?
Money transfer, or any non-revocable transaction for that matter, should require multiple sign-offs (a.k.a "two/X to sign"). Businesses have been using this for decades.
This problem isn't a technical one..it's a process issue. One person shouldn't be able to transfer $25m without multiple people authenticating and authorising.
Was expecting this to happen soon, and I guess soon is now. Will Zoom and Microsoft start to compete on the participant-authentication features they are probably going to add?
> Chan said the worker had grown suspicious after he received a message that was purportedly from the company’s UK-based chief financial officer.
It wasn't just a fake call, and he had a paper trail of the order...at this point it's pretty hard to prevent this from happening, short of having every order double checked by some other independent entity.
It's trivial to avoid. Do not accept instructions outside of the standard instruction channels. The only reason this scheme works is bad processes, bad training, or a culture of fear (where employees feel compelled to comply with any demand, regardless of process, for fear of losing their job).
If an employee routinely receives email or zoom instructions to transfer $25m without any sort of sign off then the company is completely at fault for terrible process.
> Do not accept instructions outside of the defined company processes
Most non-enterprise companies have fairly loose wire-transfer procedures. That said, outgoing phone calls to two separate signers is a good, simple best practice.
The standard instruction channels are so reliably shit, nobody bats an eye if they get an email saying ”Teams is on the fritz again, please join us on Zoom instead”
Don't know the details here, but email is still very much broken, and a number of large companies, including in the financial sector, are spoofable even after checking the usual boxes.[0]
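Even when the usual boxes are checked, the receiving side still has to actually look at the verdicts. As a hedged sketch (the helper function and the sample header are illustrative; the header name and format come from RFC 8601), this reads the Authentication-Results header a receiving mail server stamps on a message:

```python
from email import message_from_string

# Illustrative helper: extract SPF/DKIM/DMARC verdicts from the
# Authentication-Results header (RFC 8601) of a raw RFC 822 message.
# Real mail pipelines use dedicated milters/libraries for this.

def auth_verdicts(raw_message: str) -> dict:
    msg = message_from_string(raw_message)
    header = msg.get("Authentication-Results", "")
    verdicts = {}
    # The first semicolon-separated chunk is the authserv-id; the rest
    # are "mechanism=result ..." clauses.
    for part in header.split(";")[1:]:
        part = part.strip()
        for mech in ("spf", "dkim", "dmarc"):
            if part.startswith(mech + "="):
                verdicts[mech] = part.split("=", 1)[1].split()[0]
    return verdicts
```

Of course, a "pass" only means the message really came from the domain it claims; it says nothing about whether a legitimate-looking account was compromised, which is why out-of-band verification still matters.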
Perhaps I'm reading too much between the lines, but this part makes it look like he got suspicious and checked for clues. It would have been pretty bad if the email was actually marked as internal.
Same deal for the call as well. I'd expect the video client to warn that some members of the call are external to the organization (Google Meet does that). Or maybe the CFO was expected to be outside (from another org) from the get-go.
> Initially, the worker suspected it was a phishing email, as it talked of the need for a secret transaction to be carried out.
That's how I almost lost £100k. I got an email from my lawyer instructing me to pay an amount that I was expecting to have to pay, but to the wrong account. The email "from:" was definitely my lawyer's email address. It satisfied Gmail's spoofing checks. But it was not my lawyer who sent it.
Honestly, half the time I am interviewing random contractors around the world. I get a feeling they use OpenAI to answer questions. I have thrown out the typical “leet hacker” bullshit questions and rote memorization type stuff. Gone back to simply quizzing them on their own resume, digging into the finer details of what they did. Can’t deep fake experience, yet.
Why embezzle from one company when you can steal from lots of them?
This is an obvious and natural evolution of the kinds of attacks that have existed for years. It was bound to happen eventually. I think it's just sooner than people expected.
I reckon we're going to see this used for pump-and-dumps too at some point, à la a deepfake of some big pharma exec talking about acquiring some small biotech.
It’s so strange to me when people change a perfectly good title into something worse here, but it happens so often I honestly think the site should just change the functionality.
CFO is one character longer, and if that really was the issue, which is very unlikely, I can think of 100 ways to shorten it without changing something important.
As long as they're capitalized, you make the correct distinction: these are brands or trade names, not verbs or common nouns. 'Zoom' isn't even close to that level. Besides, their software utterly sucks.