How to guard against voice cloning and deepfake scams

Author: ICAEW Insights

Published: 15 Jan 2025

As voice and image fraud grows rapidly more sophisticated, Professor of Cyber Security Oli Buckley outlines the current state of the problem and suggests some countermeasures.

A new frontier opened up for financial crime in 2019. In what is considered the first case of its kind, fraudsters used an artificial intelligence (AI) voice clone of a German energy boss to scam the head of a UK subsidiary out of €220,000. 

The UK CEO believed he was complying with a phone request from the head of his company’s parent firm to send the funds to an account in Hungary to close an acquisition. Once dispatched, the money was immediately rerouted to Mexico, then scattered across multiple further locations.

The case only surfaced in commentary from the parent firm’s insurer Euler Hermes, which anonymised the affected companies and chief executives.

Early the following year, the branch manager of a United Arab Emirates bank took a call that he believed came from his head office director, requesting the prompt transfer of $35m. As in the 2019 case, the stated purpose was to close an M&A deal. The request was backed up by emails naming a lawyer who had been authorised to coordinate the deal. The manager complied. An Emirati investigation later found that criminals had used AI voice-cloning technology, supported by fake emails, to dupe the manager.

Advancing threats

Those are just two early examples of what has become a fast-rising scourge. Both cases were cited by Loughborough University Professor of Cyber Security Oli Buckley in a recent ICAEW talk to mark International Anti-Corruption Day.

According to research from Starling Bank, published in September, 28% of UK adults think they were targeted by an AI voice-cloning scam in the year to 23 August 2024. Adding to the woe, fraudsters are also using deepfakes to assist financial theft. As Insights mentioned in a recent article, an employee in the Hong Kong arm of engineering firm Arup was tricked last February by an AI-powered, deepfake video call into sending £20m to a criminal gang.

Speaking at Chartered Accountants’ Hall, Buckley warned members that deepfake fraud can also cast a wide net. In one recent case, scammers made a deepfake of Taylor Swift to front a social-media advert for a fake branding tie-up between the singer and Le Creuset cookware. The hook: send certain personal details to a stated email address and your name would go into a draw for a deluxe pots-and-pans giveaway.

“The tools to do this have been out for a few years, but are now far more widely available and have become much better,” says Buckley. “Compared with how it was 12 months ago, the technology I’m seeing now is massively more advanced.”

Convincing result

In Buckley’s assessment, the biggest driver behind the growing sophistication of voice cloning and deepfakes is generative AI (Gen AI). Looking at how Gen AI software fulfils users’ voice- and image-based prompts, he explained: “It doesn’t just throw back an à la carte recipe, like a restaurant would. It’s mixing the ingredients in a new way, rapidly selecting from almost infinite permutations of what your prompts suggest to hone the output. It’s not a million miles away from an art forger whose paintings get gradually less distinguishable from genuine works.”

Further accelerants, he noted, are accessibility and ease of use. Even though the technology is becoming increasingly complex, everyday users can easily download the relevant tools to their laptop or phone. Fee-charging apps cost as little as $5 to $10 per month, but there is an extensive range of voice-cloning platforms that are entirely free. Each requires only a minute of audio to produce a convincing result.

In one demo during his talk, Buckley freely conversed with an ‘agent’ – the technical term for a voice clone or deepfake – designed to mimic Michael Gambon’s Dumbledore from the Harry Potter films. On that particular app, agents are fully customisable: users can pick an agent’s age and even emphasise certain personality traits, such as charm, lightheartedness or gravitas. “This wouldn’t have been possible six months ago,” Buckley said.

Strengthening oversight

In a follow-up interview, Buckley explains that one telltale sign of a voice clone or deepfake is a mismatch between the words and the intonation: what is said can feel ‘off’ against how it is said. However, as the software is able to achieve a realism that may catch even hardened sceptics off guard, the main defences that companies should adopt are procedural, behavioural and cultural.

“Multifactor verification would be a positive step for financial approvals,” Buckley advises. “That may involve sending confirmation via a secondary channel, such as email, Signal or another secure messaging service. It would be prohibitive to do that for every transaction, but for values over a certain threshold, or transfers destined for suppliers or parts of the world that are new to your business, it would certainly be worthwhile.”
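To make that concrete, the routing rule Buckley describes might be encoded along the lines of the minimal Python sketch below. The threshold value, the supplier registry and the list of usual destination countries are invented placeholders, not part of any real payments system.

```python
# Minimal sketch of threshold-based multifactor verification routing.
# APPROVAL_THRESHOLD, KNOWN_SUPPLIERS and the country list are
# hypothetical placeholders, not a real treasury API.
from dataclasses import dataclass

APPROVAL_THRESHOLD = 10_000          # values above this need a second channel
KNOWN_SUPPLIERS = {"ACME-GMBH", "NORTHWIND-LTD"}

@dataclass
class PaymentRequest:
    payee: str
    amount: float
    destination_country: str

def needs_secondary_verification(req: PaymentRequest,
                                 usual_countries: set[str]) -> bool:
    """Large values, new suppliers or unfamiliar destinations
    all trigger out-of-band confirmation."""
    return (req.amount > APPROVAL_THRESHOLD
            or req.payee not in KNOWN_SUPPLIERS
            or req.destination_country not in usual_countries)

request = PaymentRequest("NEW-VENDOR-SA", 4_500, "HU")
if needs_secondary_verification(request, usual_countries={"GB", "DE"}):
    print("Hold payment: confirm via a second channel before release.")
```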

Segregation of duties – ensuring that no individual has end-to-end control over financial outflows – is also important. “Processes whereby multiple members of staff are involved with the initiation, approval and execution of payments will strengthen oversight and close down single points of failure,” Buckley says. “And while technology may not be able to help us spot voice clones or deepfakes, it would be very useful for monitoring transactions in and out of a business and looking for anomalies.” 
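The same separation can be enforced in software. The sketch below is illustrative only, with an invented Payment record and staff names, but it captures the rule: no one person may fill more than one of the three roles on a single payment.

```python
# Minimal sketch of segregation-of-duties enforcement: one person
# may not initiate, approve and execute the same payment.
# The Payment structure and names are illustrative, not a real system.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Payment:
    reference: str
    initiator: str
    approver: Optional[str] = None
    executor: Optional[str] = None

def approve(payment: Payment, user: str) -> None:
    if user == payment.initiator:
        raise PermissionError("Initiator may not approve their own payment.")
    payment.approver = user

def execute(payment: Payment, user: str) -> None:
    if payment.approver is None:
        raise PermissionError("Payment has not been approved.")
    if user in (payment.initiator, payment.approver):
        raise PermissionError("Executor must be a third member of staff.")
    payment.executor = user

p = Payment(reference="INV-0042", initiator="alice")
approve(p, "bob")    # a second person approves
execute(p, "carol")  # a third person releases the funds
```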

Credit card brands are already using machine learning and AI to check customer transactions and flag up anything that looks unusual, Buckley explains. “The provider may ring you up and request some sort of verification. There’s no reason why large organisations couldn’t implement this type of software to track their transaction habits.”
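As a toy illustration of that kind of anomaly check, far simpler than the proprietary models card providers actually run, the sketch below flags any payment that sits well outside a payee’s historical pattern. The figures are invented.

```python
# Minimal sketch of statistical anomaly flagging on payment amounts,
# using a z-score over a payee's history. Real fraud engines use far
# richer features; the data here is invented for illustration.
from statistics import mean, stdev

def is_anomalous(history: list[float], amount: float,
                 threshold: float = 3.0) -> bool:
    """Flag a payment more than `threshold` standard deviations
    from this payee's historical mean."""
    if len(history) < 2:
        return True  # too little history: route to manual review
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu
    return abs(amount - mu) / sigma > threshold

past_payments = [980.0, 1_020.0, 1_005.0, 995.0]
print(is_anomalous(past_payments, 1_010.0))   # False: in line with history
print(is_anomalous(past_payments, 35_000.0))  # True: worth a manual check
```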

Alongside those measures, Buckley stresses, staff need robust education. Making employees aware that the technology exists and that cyber criminals could strike at any time is the first step towards protecting them, encouraging staff to think critically and objectively.

Finally, Buckley notes, culture is vital: “Accept that an incident is likely to happen and set up appropriate disaster-recovery plans. Arrange aftercare for colleagues who’ve been duped, as they may feel stigmatised and embarrassed. Plus, having the freedom to ring back your CEO right after a call with them to check that a request was genuine, which may once have seemed like a strange thing to do, should probably now be a cultural norm.”

ICAEW Interim Director, Trust and Ethics, Gareth Brett says: “For accountants in both practice and industry, this evolving threat landscape demands an urgent review of existing controls. Many traditional verification methods need reassessment. For example, voice verification – which many companies implemented as a security measure – may now be obsolete in a world of AI voice cloning. In this rapidly changing environment, accountants must take the lead in evaluating whether their current controls remain fit for purpose.”

