* Your funnel starts digital and cheap (email, say).
* You need "warm leads" out of the funnel, and your closers are expensive (usually call center operators in SE Asia), so you prune down to only great leads. You do this by making the email something only very credulous people would believe.
* You aim for nearly a 100% close rate once you get them on the phone, since closers are expensive: an hour spent closing is an hour of human time.
AI with a nice English accent changes this in two ways. First, it makes closing cheap, so the funnel can stay wide earlier. That means we'll see much more plausible, hard-to-tell-if-it's-a-scam content -- there's no need to prune skeptical people so early. Second, the LLM is much smarter than, say, your bottom-third call center operator, so the scammer can safely stay in direct contact with formerly inaccessible leads for longer.
The economics here mean we're going to see a LOT of this over the next few years, and it's likely to change how we think about trust altogether, and how we think about open communication networks like the phone system.
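The funnel argument above can be sketched as back-of-envelope arithmetic. All the numbers below are made up for illustration (the comment gives no actual figures); the point is only the shape of the comparison: cheap AI closers let the funnel stay wide and still drive cost-per-close way down.

```python
# Hypothetical model of the scam funnel economics described above.
# Every parameter value here is an illustrative assumption, not real data.

def cost_per_close(emails, email_cost, lead_rate, close_rate, call_cost):
    """Total spend divided by successful closes for a given funnel shape."""
    leads = emails * lead_rate        # recipients who respond to the email
    closes = leads * close_rate       # leads converted on the phone
    total = emails * email_cost + leads * call_cost
    return total / closes

# Human closers: prune hard (credulous-only email), near-100% close rate,
# expensive per-call human time.
human = cost_per_close(emails=1_000_000, email_cost=0.0001,
                       lead_rate=0.0001, close_rate=0.9, call_cost=5.0)

# AI closers: wide funnel (plausible email), sloppier close rate,
# but calls cost almost nothing.
ai = cost_per_close(emails=1_000_000, email_cost=0.0001,
                    lead_rate=0.01, close_rate=0.2, call_cost=0.10)

print(f"human closer: ${human:.2f} per close")  # ~$6.67 under these numbers
print(f"AI closer:    ${ai:.2f} per close")     # ~$0.55 under these numbers
```

Even with a 4x worse close rate, the AI funnel comes out an order of magnitude cheaper per victim in this toy model, which is why the wide, plausible-looking funnel becomes viable.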
If someone claiming to be from your bank or Google or Amazon or wherever calls and says they need some sort of secret like a one time code, just say no. Or, say that they have the wrong number and your name is Ben Chode.
Not sure how this would fool the most adept people: regardless of what someone says, a real company will never ask you to provide credentials, and when in doubt you can just contact support directly. Another red flag was their claim that the account was compromised and data was downloaded a week ago. Google wouldn't admit that, since it could end up on Hacker News the next day and crash the stock; they would wait for the person to contact them, or resolve it quietly.
They didn’t know it was from Google Sydney until they looked up the number after talking to the person. So when the call first came in, yes it would have been from a known number/person.