
AI Assurance Conference

19 May 2025, Chartered Accountants’ Hall

Join our inaugural AI Assurance Conference, bringing together key industry players, including assurance providers, businesses, policy makers and academics, to explore the role of AI Assurance in promoting responsible AI adoption and innovation.

Book now

Highlights


Build your knowledge

Explore the real-world meaning of AI Assurance, including misconceptions and expectation gaps.


Expand your network

Engage with policy makers, regulators, assurance providers and fellow accountants in shaping the future of AI Assurance.


Gain practical insights

Hear practical advice and tips from experts on providing and procuring AI Assurance services, and on auditing clients who are using AI.


Navigate complexities

Discover the challenges associated with assuring foundation models and potential ways to address them.


Develop your skills

Participate in discussions about the types of skills needed for AI Assurance professionals, and how to develop them.

Programme

Please note that the programme is subject to change.

09:00 Registration

09:30 Welcome

09:35 Opening keynote

09:45 What is AI Assurance?
The term "AI Assurance" is used in many ways to mean different things, often leading to confusion and miscommunication. This session will provide a brief introduction to the topic, followed by a panel discussion in which key players discuss their practical experience of providing and buying assurance, including their understanding of assurance, scope of work, limitations, and expectation gaps.

10:30 Practical case studies - assurance as an enabler of responsible adoption
The potential for AI to improve efficiency and accelerate business growth is well recognised. However, adoption has been slowed by concerns about the risks AI presents, including lack of transparency and explainability of models, bias, and accountability. This session will discuss current assurance mechanisms and the extent to which they can demonstrate mitigation of these risks, including hearing from two businesses that have used AI Assurance to accelerate their organisation's responsible adoption.

11:15 Break

11:35 Debate - regulation as a driver for demand
Is regulation necessary to facilitate AI Assurance and responsible adoption? This session will debate the pros and cons of regulation as a driver for demand for assurance services, considering the EU, UK and US environments, with contributors arguing both for and against regulation.

12:15 Getting started with AI Assurance
The AI Assurance industry presents new business opportunities: the UK Department for Science, Innovation and Technology has projected that the UK assurance market could exceed £6.53bn by 2035. Accounting firms are already becoming key players in this area. This session will discuss considerations for accountants and businesses looking to expand their services into this new area, with recommendations on how to get started.

12:45 Lunch

13:45 Preparing for EU AI Act conformity assessments
The EU AI Act requires conformity assessments for high-risk AI systems (HRAIS). This session will bring together businesses, consultants and assurers to discuss requirements for businesses, challenges identified, and lessons learned so far in preparing for these assessments.

14:15 Financial audits of clients using AI
More and more audit clients are beginning to incorporate generative AI into internal processes, including financial reporting. The complexity of generative AI can present risks that challenge the way auditors have traditionally conducted audits. This session will use case studies to explore the considerations for auditing a client that has embedded generative AI into its financial processes, and the types of activities that may be considered as part of the audit.

15:00 Break

15:20 Getting assurance over foundation models
Third-party foundation models underpin the majority of AI tools used by businesses. However, getting assurance over such models can be difficult, with challenges including lack of access to necessary information. This session will explore options for assurance of such models, including how to address these challenges, the work of the newly renamed UK AI Security Institute, and the potential to leverage third-party reports.

16:00 Developing the skills for AI Assurance
With so many different types of AI Assurance activities, what are the vital skills for AI Assurance? How can the UK build such skills, and what should businesses be doing? This panel will discuss these topics, as well as the practical considerations for the development of an AI Assurance profession.

16:40 Closing remarks

16:50 Networking drinks

18:00 Event close

Our speakers

More speakers to be announced

Emily Campbell-Ratcliffe Head of AI Adoption, Governance and Skills, Department for Science, Innovation and Technology

Emily works to support the implementation of DSIT’s AI Opportunities Action Plan and unlock the opportunities of AI across the economy, whilst ensuring any associated risks are managed through the development of an ethical, trustworthy, and effective AI assurance ecosystem – a key pillar of the UK’s AI governance framework. She represents the UK at the OECD’s Working Party on AI Governance, and is also part of the OECD.AI network of experts, contributing to their Expert Groups on AI Risk & Accountability and Compute & Climate. Last year Emily was named as one of the Government AI 100.

Tim Gordon Co-founder, Best Practice AI

Tim co-founded Best Practice AI to educate and advise on AI strategy and governance. BPAI has published the world’s largest directory of AI use cases, worked with the World Economic Forum on its first AI Board Leadership Toolkit, and co-produced the world’s first AI Explainability Statements under GDPR. Tim also co-founded Salus AI, which uses AI for sales compliance checking. He has worked at the Boston Consulting Group, served on the board of a private equity-backed business, advised the Deputy Prime Minister of the UK and is a Trustee at Full Fact.

Esther Mallowah Head of Tech Policy, ICAEW

Esther influences technology policy to enable businesses, accountants and society to harness its benefits while limiting harm. Prior to joining ICAEW, she worked in technology internal and external audit roles, focusing on information and cyber security and operational resilience. She is also a qualified ICAEW Chartered Accountant, having qualified whilst at Deloitte.

Pauline Norstrom Founder and CEO, Anekanta®AI and Anekanta®Consulting

Pauline is the CEO of Anekanta®AI and Anekanta®Consulting, which provide AI strategy, literacy and governance services aligned with the requirements of the EU AI Act and pending international regulations. The companies operate across sectors including facilities management, security, transportation, aviation, retail, and education. She previously held executive board roles in large international industrial video and high-risk AI technology companies. As a leader in Responsible AI, Pauline has driven good practice for over 20 years, focusing recently on facial recognition guidance, standards and regulation, as well as AI policy and governance frameworks for directors.

Our partners

Partners to be announced

See all our events

ICAEW runs a variety of events in support of financial professionals in business and practice, from full-day conferences offering the latest updates for specific sectors to webinars offering support on technical areas and communication skills. There are hundreds of learning opportunities available.

Browse all ICAEW events
ICAEW's flagship events