Many of us have used artificial intelligence (AI) for years, from email auto-complete to chatbots on a software helpdesk. It’s there, just at subtle, non-threatening touchpoints. For the accountancy profession, the benefits are clear: hand over the laborious, repetitive tasks and free practitioners to use their skills to address more meaningful, analytical and (frankly) profitable aspects of the job.
When it comes to the benefits of AI, Steve Cox, Chief Evangelist at software company IRIS, points to last year’s Patisserie Valerie accounting scandal, in which the hole in the accounts eventually totalled roughly £94m. Had AI been used, he believes, the discrepancies would have been detected far sooner.
AI could also be used to find cyber attackers sooner. Across Europe, the Middle East and Africa, the overall dwell time – the number of days an attacker is present on a victim’s network before detection – remained largely static year on year, at 177 days in 2018 against 175 days in 2017, according to cyber security consultancy FireEye’s M-Trends 2019 report. Catching symptoms early improves the chances of tackling problems, which is where AI comes in.
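To make the idea concrete, here is a minimal sketch of that kind of early-warning screening, written in Python and assuming scikit-learn plus a hypothetical ledger.csv export with a single amount column. It illustrates the principle – score everything, surface the outliers for a human to review – and is not a description of any vendor’s actual tooling.

```python
# Minimal sketch of AI-style early-warning screening: score every ledger
# entry with an Isolation Forest and surface outliers for human review.
# 'ledger.csv' and its single 'amount' column are hypothetical; real fraud
# and intrusion detection pipelines use far richer feature sets.
import pandas as pd
from sklearn.ensemble import IsolationForest

ledger = pd.read_csv("ledger.csv")            # hypothetical journal-entry export
features = ledger[["amount"]]                 # one feature, for illustration only

# 'contamination' is the assumed share of anomalous entries (a tuning guess)
model = IsolationForest(contamination=0.01, random_state=0)
ledger["flag"] = model.fit_predict(features)  # -1 = anomaly, 1 = normal

suspicious = ledger[ledger["flag"] == -1]
print(f"{len(suspicious)} entries flagged for human review")
```

The point is speed: a screen like this runs on every new batch of entries, so a discrepancy surfaces in days rather than lying undetected for months.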
However, there are two key areas where the use of AI in cyber security poses fresh challenges, according to Richard Anning, Head of the Tech Faculty at ICAEW. The first is the vast increase in the resources required: personnel with the relevant expertise, but also the requisite training data. For AI algorithms to learn effectively, they need access to large volumes of existing malware code, which, Anning says, is a significant drawback given the financial and operational investment necessary.
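A toy sketch illustrates why the training data matters. Here the samples are synthetic stand-ins generated with NumPy and scikit-learn; in practice each row would be features extracted from a real malicious or benign binary, and assembling that labelled corpus at scale is precisely the costly part Anning describes.

```python
# Sketch of why malware-detection models are data-hungry: a supervised
# classifier is only as good as the labelled corpus behind it. These
# synthetic rows stand in for feature vectors extracted from real
# malicious and benign binaries.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 20))              # stand-in feature vectors
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # stand-in malware/benign labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
# Shrink the corpus to, say, 100 rows and accuracy degrades sharply:
# the labelled data, not the algorithm, is the bottleneck.
```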
The other issue is that just as accountancy firms and their clients are preparing for greater adoption of AI in cyber security, so too are cyber criminals. Anning notes: “While companies are using AI to help defend themselves against cyberattacks, the cyber criminals are using AI to get around AI defences.”
Just as the gap in traditional cyber security has widened in the criminals’ favour as they stay one step ahead, that gap will inevitably widen further and faster with the rising use of AI. The reasons are rooted in the very nature of AI.
Because AI runs on data, keeping that data safe should be a priority. “The key risks when using AI or machine learning tools in the deal environment are around data security,” says Stephen Bates, partner at KPMG. The tools capture sensitive data that may fall under various privacy laws across multiple borders, and that data then has to be carefully managed by the provider, the AI company and the accounting firm.
“Accountancy firms need to properly understand the data capture and storage process, the initial and ongoing use of the data, and ownership and protection of that data – especially that it meets legal and regulatory requirements,” Bates says.
With M&A data, hackers might be looking to profit from investment decisions based on intelligence about a looming deal, for example, while in the private client tax division, the financial details of high-net-worth individuals are often of interest to the media. For the accountants acting for the victims, the reputational damage from a breach may prove irreparable.
For a profession so attuned to rigour and validation, order and process, having to accept the ‘black box’ opacity of AI presents some (potentially legal) challenges, because AI’s scope for reaching outcomes that cannot readily be explained is enormous.
Yet, given that AI does not produce the data – which may be cloud-based – technically it should not introduce any new risk, Cox explains. But if an AI tool were used that clones the data, that opens up a whole new arena of possible risks, he adds. However: “Most vendors will use platforms such as Amazon Web Services, Microsoft Azure or IBM, which put security at the heart of everything that they do. Data is not going to be a major issue,” he says.
In using algorithms designed to learn and evolve, one issue with AI lies in a firm’s internal risk management, according to Mazars. Asam Malik, Mazars’ UK Head of Technology Consulting and Assurance, believes a firm’s ability to audit its AI poses a governance risk: “AI learns and evolves based on patterns; it brings a more subjective nature, which is where the nervousness comes in.” Yet Malik caveats this by saying: “With any new technology, there will always be nervousness around how we manage, audit and understand it. But you develop techniques to manage that risk.”
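One such technique, sketched below under stated assumptions, is an append-only audit log: wrap every model prediction so its inputs, output and timestamp are recorded for later review. The model object and log path are hypothetical placeholders, not Mazars’ actual approach.

```python
# Governance sketch: record every AI prediction in an append-only log so
# auditors can later reconstruct what the model saw and what it decided.
# 'model' is assumed to be any object with a scikit-learn-style predict().
import json
from datetime import datetime, timezone

def audited_predict(model, features: dict, log_path: str = "ai_audit.log"):
    """Run a prediction and append inputs, output and timestamp to the log."""
    prediction = model.predict([list(features.values())])[0]
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "inputs": features,
        "output": float(prediction),
    }
    with open(log_path, "a") as log:  # append-only trail for audit
        log.write(json.dumps(record) + "\n")
    return prediction
```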
Securing security expertise
Accountants are not asking why AI should be used, but why it shouldn’t. Clients are getting younger and more digitally focused, and expect a technology-first approach. AI can also be positioned as a source of competitive advantage.
Smith & Williamson business tax team partner Tom Shave recommends recruiting beyond the traditional professional services universe and is starting to see demand for technology expertise grow.
However, one challenge is where cyber security professionals should sit within the firm: in specific business units, or centrally, where all departments can draw on their expertise?
“You may need to get those professionals closer to the people using the data. If you’re sitting in a separate IT function, you will be slightly removed from your average graduate consultant using that data, and it could be harder to spot the risks,” Shave says.