Where are we now with AI assurance?

Author: ICAEW Insights

Published: 27 Feb 2025

The artificial intelligence assurance market is still in flux. Which forces will determine the shape this nascent market takes?

According to a recent report from the Department for Science, Innovation and Technology (DSIT), an estimated 524 firms currently supply artificial intelligence (AI) assurance goods and services in the UK, 84 of which are specialised AI assurance companies. These companies generate estimated revenues of just over £1bn.

Relative to economic activity, this is a bigger AI assurance market than those in the US, Germany and France, and DSIT sees considerable room for growth, with the potential for the market to exceed £6.53bn by 2035.

For accountants, this represents a considerable opportunity, but that potential for growth is only part of the story. At an ICAEW round table in 2023, attendees discussed critical foundations for quality AI assurance, including clear foundational and process standards, an understanding of the risks involved with such a rapidly developing technology, and certified skills.

By any gauge, then, this is a market with huge potential and several unknowns. So which forces are likely to determine how AI assurance will settle into a more cohesive shape and develop its own, routine conventions?

Fundamental start line

According to Tim Gordon – Co-founder and Partner at specialist consultancy Best Practice AI – the “main game in town” will be the long-awaited EU AI Act, which Brussels will begin to implement in early 2025. “The challenge for the UK is that we are now outside the EU,” he says, “but UK companies that use AI will have to comply with the Act so they can do business in Europe. Any AI start-ups that want to go global, which is certainly something they should aspire to, will also need to comply. And if you listen to the mood music from some of the big US banks, one point they’re making is that they’ll have to apply the EU AI Act globally. Now, that’s not official US policy. But it’s a sign of how much the proposed Act matters.” 

One impact of the Paris AI Action Summit, held after this conversation took place, has been to shift the European debate away from aggressive implementation of new AI regulation.

However, based on the text of the Act, Gordon points out, a key area of focus is likely to be transparency. In other words, ensuring that AI developers, and those who use AI tools in their business models, are able to explain how their systems behave, what purpose they aim to serve, where their supporting data comes from and what sort of governance and safety procedures sit around them.

“That will be the fundamental start line for any organisation that needs to think about AI,” Gordon says. “And ultimately, when the audit industry gets properly up and running on this and starts doing AI assurance at scale, that’s what it will have to focus on. Professionals will need to go through an understanding with audited entities of whether they’ve really thought through a number of factors that contribute to the performance of their AI systems. That would include data sources, technical approach, training methodology, governance, stakeholders and people – not to mention bias and ethical implications.”

Builders and buyers

In Gordon’s assessment, domestic AI regulation could emerge towards the end of 2025. As any UK AI Act would build on the work of the AI Security Institute, part of DSIT, Gordon says it is likely to focus on so-called ‘frontier models’: advanced, large-scale systems with potential impacts on national security. That is likely to set the general tone for rulesets affecting the assurance industry, even if the nature of these models means that not many will be in operation, at least initially. However, he notes, that foundational Act would probably be followed by a series of rulesets for different verticals, such as healthcare and financial services. At that stage, assurance work would increase.

“There will be two sets of people under scrutiny here,” Gordon says. “First, those who are building the tools, and second, those who are buying them and embedding them into their business models. So, the builders will be required to demonstrate that they are developing their systems in line with regulations, and the buyers will have to show that they are doing the right things on the implementation side. Auditors will be looking to understand: ‘Okay, so you’ve built this AI platform inside your company, and you’ve used these three suppliers – are they all working in compliance with the relevant rules?’”

For Gordon, that will start to pin down areas of AI assurance that are currently up in the air. “Right now,” he points out, “there are probably more assurers trying to work out what they should be doing on assuring AI tools than doing it. Part of the challenge is, ‘What are we actually assuring for or against?’ And, more broadly, we’re not yet in an economic position whereby, for example, if you haven’t got AI assurance, you can’t get insured.” 

Competitive advantage

Alongside the clarifying effects of regulation, Gordon says, an equally important factor in defining the shape of AI assurance will be – as the DSIT report suggests – education. AI tools, he notes, are often built by teams that are working at a rapid pace and not necessarily fully documenting what they are doing. 

Not only are many frontier models effectively black boxes, in terms of how they create specific outcomes – they are also stochastic. This means that, unlike a traditional IT system designed to produce set outputs, the generative nature of these models can produce different outcomes from similar inputs. 

Assurers will therefore need to be thoroughly trained in AI’s uniquely dynamic nature. In DSIT’s assessment, the way forward is to accredit institutions or certify individuals as AI Assurance Professionals.

That, though, has its own hurdles. “The UK sees AI assurance as a potential hook for global competitive advantage,” Gordon notes. “But how do we turn training and accreditation into a corporate story we can sell to the world? And what are we even measuring people against?”

One route, he suggests, could be to create certification and accreditation for skills that are linked to standards for AI products, including assurance.

DSIT plans to drive growth in the AI assurance market by developing an array of resources hosted on an AI Assurance Platform, including an AI Essentials Toolkit, to help raise awareness of and demand for AI assurance services. It is also developing a “roadmap to trusted AI assurance” to raise standards and quality.

The Trump administration appears to be taking a very light-touch approach to AI regulation. Gordon says this may complicate the UK’s attempts to set standards, particularly if US states set their own rules. “Individual states are already looking at how to create sets of rules for how people can use AI in a range of different contexts, from HR to insurance. It may be difficult for a UK kitemark to establish a presence against that crowded backdrop.”

Despite that risk, the UK is well placed to lead the way on AI assurance, and accountants could have a critical role to play. ICAEW will hold its inaugural AI Assurance Conference on 19 May at Chartered Accountants’ Hall, exploring regulation, skills development and considerations for AI assurance.
