In a cybersecurity landscape already filled with credible threats, voice deepfakes now add another headache for experts.
The Wall Street Journal just reported a case of voice fraud in which hackers tricked a company executive into making a €220,000 ($243,000) transfer to their account by using AI to “fake” the CEO’s voice.
In a troubling development, the hackers apparently used a deepfake voice created by voice-generating AI software that’s commercially available.
They impersonated the CEO of the German parent company of a UK-based firm, tricking a high-ranking official into transferring the $243,000 to a supplier.
The attackers then called two more times to request a second payment but, fortunately, the executive noticed that the call came from Austria and started investigating.
He then found that the money he had been ordered to send to a Hungarian supplier had been moved to Mexico and “distributed to other locations.”
Voice-spoofing and voice phishing (vishing) attacks are nothing new in the world of cybercrime, but according to officials quoted by the Wall Street Journal, this is the first known case in which criminals have clearly used AI to carry out a voice-spoofing attack.
For cybersecurity experts, deepfake recordings are causing some major headaches, as traditional security tools are unable to detect computer-generated voices.
Now, they have to play catch-up fast.
“While it is hard to predict whether there might soon be an uptick in cyberattacks using AI, Mr. Amann said hackers are more likely to use the technology if it makes attacks more successful or profitable,” says the WSJ report, referring to Philipp Amann, head of strategy at Europol’s European Cybercrime Centre.
Meanwhile, deepfake videos raise serious questions about society’s future in a digital landscape where nothing you see can be trusted.