How can banks manage the risk of deepfakes?
Cybercriminals are intent on deceiving people through AI technology, posing as either banks or their customers. As fraudsters’ tactics and tools increase in sophistication, telling real from fake is becoming much harder. How can banks manage the risk?
Data: a cybercriminal’s best friend
Today, people’s lives are increasingly conducted online, leaving a data trail behind them. Malicious actors lurk in the shadows waiting for a vulnerable target, then leverage the power of AI to commit identity fraud, steal money, or both.
“The more digital traces we leave online, the more data there is available for criminals to use to train these AI tools,” explains Jochem Hummel, Assistant Professor, Information Systems Management and Analytics Group, Warwick Business School.
AI can generate authentic-seeming content that tricks people into believing they’re dealing with a trusted entity, such as a business or a bank. As AI becomes smarter, it may be harder for banks and customers to differentiate between a fake and the real thing. “Not only can AI impersonate a banker, but also friends and family. It can even emulate you as a person,” adds Hummel.
Fighting deepfakes, with AI?
When it comes to deepfake scams and the prevention of ID fraud, the biggest challenge facing banks is recognising customers’ real voices. Only the customer in question, or perhaps a close friend or relative, can do that reliably. Banks will therefore need to turn increasingly to technology as their main line of defence. “For banks, the technology required to tackle the threat from technology itself is a money-consuming race in which there is no choice but to partake,” explains Dr Viktor Dörfler, Professor of AI Strategy, University of Strathclyde.
Technology is the sensible route for banks to prevent cyberfraud and protect customers. “While banks have very basic technological means of defence, such as two-factor authentication, there are more advanced options available nowadays, like 360-facial recognition scans or face-to-face video checks,” says Hummel.
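To make the “basic” end of that spectrum concrete, below is a minimal sketch of how a time-based one-time password, the kind of code behind many two-factor authentication apps, can be generated and checked using only Python’s standard library, following RFC 6238. The shared secret shown is a hypothetical demo value, not a real credential, and a production system would handle secret storage and clock drift far more carefully.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval           # 30-second time step
    msg = struct.pack(">Q", counter)                 # counter as 8 big-endian bytes
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(secret_b32: str, submitted: str) -> bool:
    """Compare the submitted code against the current one in constant time."""
    return hmac.compare_digest(totp(secret_b32), submitted)

# Hypothetical secret, provisioned when the customer enrols a device.
secret = "JBSWY3DPEHPK3PXP"
print(totp(secret))                  # the six-digit code for the current window
print(verify(secret, totp(secret)))  # True
```

The point of the sketch is that even this baseline defence binds a login to something the fraudster does not have: the shared secret on the customer’s device, not just a voice or face that AI can now imitate.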
The rapid pace of modern payments is also a breeding ground for fraud, adding urgency to banks’ mitigation efforts. AI tools could be used to filter out unusual transactions: “Pre-emptively, AI could block anything suspicious and then wait for the bank to confirm the validity of the transaction with the customer,” says Hummel.
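As an illustration of the kind of pre-emptive filter Hummel describes, the sketch below trains a simple anomaly detector on a customer’s hypothetical transaction history and holds anything out of pattern for confirmation. The features, sample values and thresholds are assumptions for illustration, not a production fraud model.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical transaction features: [amount, hour_of_day, days_since_last_txn]
history = np.array([
    [25.0, 12, 1], [40.0, 18, 2], [12.5, 9, 1],
    [60.0, 20, 3], [30.0, 13, 1], [22.0, 11, 2],
])

# Fit an anomaly detector to the customer's normal spending behaviour.
model = IsolationForest(contamination=0.05, random_state=0).fit(history)

def screen(txn: list[float]) -> str:
    """Hold a transaction for customer confirmation if it looks anomalous."""
    label = model.predict(np.array([txn]))[0]   # 1 = normal, -1 = anomaly
    return "hold for customer confirmation" if label == -1 else "allow"

print(screen([35.0, 14, 1]))    # an ordinary purchase: likely "allow"
print(screen([9500.0, 3, 0]))   # a large 3am transfer: likely held
```

The design choice mirrors the quote: the model never approves a payment on its own; an anomalous score merely pauses the transaction until the bank confirms it with the customer.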
Educate customers and urge caution
“We’re learning about AI all the time and have to keep abreast of the risks. Take social media, for example. Many fraudsters use it to learn about their potential victims, so we need to be more careful when socialising online,” cautions Christian Garcia, CEO, Trusted Novus Bank.
Hummel believes that collaboration between banks, governments and educational institutions can help fight the threat from deepfakes. “People must be made aware of what AI can do and how we can develop and safeguard against its criminal activities,” he says.
Personalised campaigns targeted at specific generations could also raise awareness and promote caution, particularly among banking customers who are less technologically savvy. “Reaching people to warn them about what AI can already do with their data, that’s already available online, could help them become more cautious,” he adds.