
How to make AI work for your business

Writer Nick Huber talks to two accountants about the AI that has worked for them, and takes a brief look at what happens when plans don’t pan out and when criminals harness AI.

"borrow"
Matthew Campbell is Head of Audit Technology at KPMG. The Big Four accounting firm uses AI during audits for tasks including extracting data from invoices and identifying data anomalies, for example in general ledger entries, for further investigation by audit staff.

One example of an anomaly is a chief financial officer (CFO) posting an entry in the client’s accounts. Senior people don’t usually post entries but would review them, Campbell says. “CFOs generally shouldn’t have access to post entries. An entry posted by the CFO could be interpreted as a higher risk of fraud.”
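The rule Campbell describes can be pictured as a simple filter over journal entries. The sketch below is a toy, hand-written illustration rather than KPMG’s actual tooling: the field names, role list and the idea of a single hard-coded rule are all assumptions made for the example.

```python
# Toy illustration only (not KPMG's software): flag journal entries posted
# by senior officers who would normally review entries rather than post them.
# Field names ("id", "amount", "posted_by_role") are assumed for the example.

SENIOR_ROLES = {"CFO", "CEO", "Finance Director"}

def flag_senior_postings(journal_entries):
    """Return entries posted, rather than merely reviewed, by senior officers."""
    return [
        {**entry, "reason": "posted by a senior officer"}
        for entry in journal_entries
        if entry["posted_by_role"] in SENIOR_ROLES
    ]

entries = [
    {"id": "JE-1001", "amount": 12500.00, "posted_by_role": "Accounts Assistant"},
    {"id": "JE-1002", "amount": 980000.00, "posted_by_role": "CFO"},
]

for hit in flag_senior_postings(entries):
    print(f"{hit['id']}: {hit['reason']}, route to the audit team for follow-up")
```

In practice a rule like this would be one signal among many; the point is simply that the “who posted it” field becomes a risk indicator the audit team can follow up.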

The AI technology has given KPMG “better insights” during audits, improving their quality and efficiency, according to Campbell. It can analyse large numbers of documents in a shorter period than manual review would take, he says. This helps improve an audit’s quality because all transactions are checked, rather than the sample that would be tested under traditional audit methods. “The volume of transactions differs significantly across the companies we audit. The number of transactions we analyse can be hundreds or thousands or in some cases billions.”

Calculating the precise benefits of the AI used in audits is tricky, though, Campbell says, due to variations in how it’s used, the types of data it’s applied to and the number of transactions it analyses. “In some cases, for example the overall effort on the audit could increase if more anomalies were flagged for follow up,” he says. “This is the same for the use of any technology in audit.”

Kirstin Gillon, Technical Manager of ICAEW’s IT Faculty and author of its 2018 report, Artificial Intelligence and the Future of Accountancy, says that within accountancy, AI is probably most widely used in audit. “AI use builds on recent investments in data analytics and greater emphasis on looking at all the data, rather than samples,” she says.

AI is also being used for tax and business advisory services – in particular by the Big Four accounting firms. EY has created its own AI tool (EY Document Intelligence) which uses natural language processing to improve the speed and accuracy of interpreting large numbers of contracts, allowing EY audit teams to “focus on higher value activities”.

The firm is also using AI (speech recognition, natural language processing and machine learning) to analyse recordings of clients’ call-centre calls. EY says the AI helps companies improve their understanding of why customers leave, resolve customer problems more quickly and understand why call-centre staff are not reading customers’ financial disclosure statements needed to comply with financial regulations. Deloitte has developed AI to analyse thousands of European Court of Justice tax cases, summarise them and predict whether a tax case is likely to be successful in court.

Over the next 10 years, AI will automate much more audit work, KPMG’s Campbell says. It will be able to identify high-risk areas of an audit on its own, rather than having to be pre-programmed by humans about what risks to look for, he says. Other likely improvements in AI include its ability to analyse unstructured data, such as emails and voice calls, and to perform “sentiment analysis” (analysing emotions and feelings).

Currently, AI is better equipped to analyse structured data such as Excel spreadsheets. With stronger unstructured-data capabilities, for example, AI could judge whether the narrative front half of an organisation’s annual report and its press releases are consistent with the reality of the numbers in the back half.
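As a rough illustration of that idea, a tone score for the narrative text could be compared with the direction of the reported numbers. The sketch below is a toy only; real tools would use trained language models, and the word lists, function names and inputs here are assumptions made for the example.

```python
# Toy illustration only: a crude word-count "sentiment" score for report
# narrative, checked against the direction of reported profit. Word lists
# and function names are assumptions; real tools use trained NLP models.

POSITIVE = {"growth", "strong", "record", "improved", "resilient"}
NEGATIVE = {"decline", "loss", "impairment", "weak", "uncertain"}

def narrative_tone(text):
    """Return positive-word count minus negative-word count."""
    words = [w.strip(".,").lower() for w in text.split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def consistency_check(narrative, profit_change):
    """Flag reports whose narrative tone contradicts the profit movement."""
    tone = narrative_tone(narrative)
    if tone > 0 and profit_change < 0:
        return "Upbeat narrative but profits fell: investigate"
    if tone < 0 and profit_change > 0:
        return "Downbeat narrative but profits rose: investigate"
    return "Narrative and numbers broadly consistent"

print(consistency_check("A record year of strong growth.", profit_change=-2400000))
```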

Becky Shields, Partner at Moore Kingston Smith (MKS), is another early adopter of AI. MKS has been using MindBridge’s AI for three years in its audits, for tasks including sampling client transactions such as sales, invoices and purchases.

The software scans transactions for potentially suspicious activity, which Shields says gives her a “fighting chance” of spotting fraud. During one audit the software spotted that a transaction had been made over a weekend, which is normally considered unusual and something an auditor will investigate. MKS checked, but found nothing untoward. “Our AI helps us to give a more in-depth service,” Shields says. “Without our AI service we might have missed the weekend transaction.”
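A check like the weekend flag Shields describes can be sketched as a simple date filter. The snippet below is a hypothetical illustration rather than MindBridge’s actual logic; the field names and data shape are assumptions made for the example.

```python
# Hypothetical illustration only (not MindBridge's logic): flag transactions
# posted on a Saturday or Sunday so an auditor can follow them up.
from datetime import datetime

def flag_weekend_transactions(transactions):
    """Return transactions whose posting timestamp falls on a weekend."""
    flagged = []
    for tx in transactions:
        posted = datetime.fromisoformat(tx["posted_at"])
        if posted.weekday() >= 5:  # Monday is 0, so 5 and 6 are Saturday and Sunday
            flagged.append(tx)
    return flagged

transactions = [
    {"id": "INV-204", "amount": 4200.00, "posted_at": "2020-03-13T16:42:00"},   # a Friday
    {"id": "INV-205", "amount": 18750.00, "posted_at": "2020-03-14T09:05:00"},  # a Saturday
]

for tx in flag_weekend_transactions(transactions):
    print(f"{tx['id']} was posted at the weekend: review with the client")
```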

Automating some audit work has helped audit staff “use their time more wisely”, Shields says.

MKS plans to use AI in real time to analyse a data lake of client transactions, with clients’ permission. “We probably wouldn’t get value from this on a daily basis but could get value on a monthly basis, [such as for] fraud indicators,” she says.

When AI doesn’t work

In 2018, BlackRock, an investment management company, reportedly suspended AI-based liquidity-risk models because the quantitative analysts who had developed them could not explain the models to their bosses. Other problems with corporate AI stem from unintended consequences.

In 2016, Microsoft withdrew an AI-powered chatbot, called Tay, after Twitter users got it to swear and make racist comments. At the time, Microsoft said that it was “deeply sorry” for the unintended offensive and hurtful tweets from Tay, which did not represent the company, its values or how Tay was designed.

The financial services industry is one of the biggest investors in AI. And it has found some early uses of AI have not gone according to plan. Samathur Li Kin-kan, a Hong Kong investor, hoped that a supercomputer (K1) using AI technology would improve his investment returns.

K1 trawled online sources including real-time news and social media to assess investor sentiment and make investment predictions. It then sent instructions to a broker to execute trades, adapting its investment strategy as it learnt. However, the computer regularly lost money in trades, including up to $20m (£15.3m) in a single day, according to a Bloomberg report.

Even criminals have begun to use AI. Last year, fraudsters used AI software to trick the CEO of a UK-based energy company into transferring €220,000 (£188,000) into a bank account.

Criminals used AI to impersonate the voice of the UK CEO’s boss. The caller said the request was urgent and told the executive to pay within an hour.

We want to encourage wider debate about the long-term opportunities and challenges that AI poses for the profession.
