
And obviously untrue. If you’re an employee who just caused a security incident, of course you’re going to make the attack seem as sophisticated as possible. But given that Retool has hundreds of employees from all over the world, the range of accents is wide enough that any voice will sound like that of at least one employee.

Are you close enough to the members of your IT team to recognise their voices, but not close enough to make any small talk that the attacker wouldn’t be able to respond to convincingly?

If you’re an attacker who can do a convincing French accent, pick an IT employee from LinkedIn with a French name. No need to do the hard work of tracking down source audio for a deepfake when voices are the least distinguishable part of our identity.

Every story about someone being conned over the phone now includes a line about deepfakes but these exact attacks have been happening for decades.



Fully agreed. Saying a deepfaked voice was involved without hard proof is deflecting blame by claiming magic was at work.


I think it’s right to be skeptical, but it’s also easy to do this if you’ve identified the employee whose voice to train on. You could even call them and get them to talk for a few minutes if you couldn’t find their Instagram.


Instagram for sound? You must know people who use it very differently from the people I know (if they’re even still on it).



