Practitioners from across industry and practice have been sharing their first-hand experiences of artificial intelligence (AI) adoption, to support ICAEW in creating guidance on the ethical use of AI.
At round-table events held over the summer, accountants were joined by academics to discuss the realities of using AI in accountancy. Alongside sharing case studies, the participants were honest about the key risks of using the technology. Importantly, they also proposed potential strategies to mitigate these risks, providing valuable insights for the profession.
A central topic of discussion was the challenge of engaging with third parties on AI – particularly developers and vendors of AI tools. Participants identified this as a pressing concern, especially for auditors.
The need for transparency
The primary question auditors pose when seeking assurance is: “How does this work?” To explain the outputs of an AI tool, an auditor needs a solid understanding of how it works. This requirement highlights a growing need for AI literacy among auditors.
In turn, developers and vendors of AI systems must be able to clearly explain to clients how their tools generate outputs. This explanation should include:
- the data used to train the AI;
- the objective function of the AI models; and
- the system’s intended purpose.
Without this level of transparency, auditors cannot place reliance on the AI tool’s output as part of their audit testing. As one participant said: “If you’re not able to explain what the AI is doing, then you can’t rely on it from an audit perspective.”
Such a scenario would be unacceptable to regulators, according to our round-table participants. There was consensus that all organisations operating within a regulated space, including third-party suppliers, must accept a level of transparency about what AI tools and systems do, and how they do it.
However, while some third-party suppliers appear to accept this, anecdotal evidence suggests that others prefer to hide behind their intellectual property rights. Many use these rights to protect proprietary datasets and algorithms. Adding to the complexity, several leading AI labs have themselves admitted they don’t fully understand how their models work. This creates a significant practical challenge for auditors and those seeking assurance, as well as for regulators, who must balance the need for transparency with the protection of legitimate intellectual property.
Opening the ‘black box’
Companies subject to audit that are considering investing in AI must ensure their suppliers are prepared to provide sufficient transparency about their model. While full comprehension of the ‘black box’ may not always be feasible, suppliers should be ready to offer detailed information on model reliability, data integrity, bias mitigation strategies and quality assurance processes. This level of insight is crucial for auditors and regulators to assess AI’s impact on financial reporting and risk management.
One participant explained: “If your company is using an AI tool that feeds into your audited figures, you need to tell whoever’s building the tool that someone’s going to ask questions about that, and they need to be prepared to answer those questions. At the very least, you need to have an upfront conversation with a third-party supplier to say: ‘Are you prepared to explain how this works to auditors and regulators?’”
Due diligence
Those at the round table discussed how best to achieve the desired transparency through standard due diligence, whether via contractual obligations or obligations set out in a code or framework.
The group agreed that certain basic questions ought to be put to the vendors of AI systems, including:
- What are the model attributes?
- How have datasets been created?
- What data has the model been trained on?
- What is the source of the data?
- Where third-party data has been used to train the AI, has explicit consent been obtained to use that data?
- How do the algorithms work?
- Are there any known biases?
- Are there any issues likely to affect quality or confidence levels?
While procurers of AI systems should question suppliers on these issues, suppliers should also have a corresponding duty to disclose this information. This should include details on data sources, model objectives and known limitations.
Furthermore, both parties must have a clear understanding of the context and intended use of the AI tools. Before purchase and installation, a realistic assessment is vital to determine whether the AI tool can deliver against expectations and requirements, considering both its capabilities and the organisation's needs.
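For firms that want to apply these questions consistently across suppliers, it can help to capture them in a structured checklist that records each answer and flags gaps. The short Python sketch below is a hypothetical illustration only: the `VendorAssessment` structure, its field names and the example vendor are assumptions made for this article, not a format prescribed by ICAEW or the round table.

```python
"""Hypothetical sketch: recording the round-table due-diligence questions as a checklist.

The structure and field names are illustrative assumptions only.
"""
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class VendorAssessment:
    vendor_name: str
    # What are the model attributes?
    model_attributes: Optional[str] = None
    # How have datasets been created?
    dataset_creation: Optional[str] = None
    # What data has the model been trained on, and what is its source?
    training_data_sources: list = field(default_factory=list)
    # Where third-party data was used, has explicit consent been obtained?
    third_party_consent_obtained: Optional[bool] = None
    # How do the algorithms work?
    algorithm_summary: Optional[str] = None
    # Are there any known biases, or issues affecting quality or confidence?
    known_biases: list = field(default_factory=list)
    quality_concerns: list = field(default_factory=list)

    def open_items(self) -> list:
        """Return the questions the supplier has not yet answered."""
        gaps = []
        if self.model_attributes is None:
            gaps.append("Model attributes not described")
        if self.dataset_creation is None:
            gaps.append("Dataset creation process not described")
        if not self.training_data_sources:
            gaps.append("Training data and sources not listed")
        if self.third_party_consent_obtained is None:
            gaps.append("Consent for third-party training data not confirmed")
        if self.algorithm_summary is None:
            gaps.append("Explanation of how the algorithms work not provided")
        return gaps


if __name__ == "__main__":
    # Hypothetical supplier used purely for illustration.
    assessment = VendorAssessment(
        vendor_name="Example AI Ltd",
        algorithm_summary="Gradient-boosted decision trees over invoice data",
    )
    for gap in assessment.open_items():
        print(gap)  # each gap is a question to raise before relying on the tool
```

A spreadsheet or procurement questionnaire would serve the same purpose; the point is that unanswered questions surface explicitly before purchase rather than during the audit.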
One firm described the technology risk-approval process it has in place, with associated policies and guidance, for asking these questions. The process includes reference to:
- security classification standards of data;
- whether the technology has been registered;
- whether personal data has been processed in accordance with General Data Protection Regulation (GDPR) requirements; and
- clarification about which parties are hosting, processing and exercising ownership of data.
Disclosing AI use to third parties
The discussion also addressed whether organisations should inform their customers about AI use. Some participants suggested that disclosure might be necessary only when AI was making a decision that could affect the customer. Others argued for greater transparency, proposing clear disclaimers such as: “AI tools have been used to produce this output; please verify and use at your own discretion.” They believed this approach could build trust.
However, it was recognised that AI implementation involves a chain of trust-based assumptions. Some participants considered that without disclosure, individuals couldn’t confidently rely on the assumptions made at each stage of this value chain.
A question of trade-offs?
Some participants considered that the extent of disclosure about AI use, and the best means of making it, should be proportionate to the perceived risks the AI system posed to customers. One participant explained: “There is a relationship between the transparency of the AI and the robustness and quality of the output of the AI. There’s an inevitable trade-off, and that means if you have more explainable AI, it is potentially going to provide slightly less good answers.”
Such trade-offs might include:
- Data minimisation and statistical accuracy.
- Explainability and statistical accuracy.
- Producing an AI system that is ‘accurate enough’ and which avoids discrimination.
The round-table attendees agreed that each organisation must determine which trade-offs it is comfortable with and the level of risk it is willing to accept to achieve optimal outcomes for stakeholders. This decision must be made at the procurement stage, with organisations making independent evaluations of any trade-offs as an integral part of the due diligence process.
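The explainability/accuracy trade-off can be seen in miniature by comparing a readily inspectable model with a more opaque one on the same data. The sketch below is a minimal illustration using scikit-learn and synthetic data; the models, the data and any accuracy gap it shows are assumptions for demonstration, and say nothing about any particular AI tool discussed at the round table.

```python
"""Illustrative only: a transparent model vs a less interpretable one.

Synthetic data and generic models are used as assumptions for demonstration,
not as evidence about any real AI tool or its performance.
"""
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic classification data standing in for, say, transaction-level flags.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A transparent model: its coefficients can be read and explained to an auditor.
explainable = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# A less interpretable ensemble: often (though not always) more accurate,
# but harder to explain feature by feature.
opaque = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

print("Logistic regression accuracy:",
      accuracy_score(y_test, explainable.predict(X_test)))
print("Gradient boosting accuracy:  ",
      accuracy_score(y_test, opaque.predict(X_test)))

# The logistic model's coefficients are directly inspectable evidence of
# 'how this works'; the boosted ensemble needs additional explainability tooling.
print("First five coefficients:", explainable.coef_[0][:5])
```

In practice the gap may be small, absent or even reversed; the due diligence point is that whichever model is chosen, the supplier should be able to explain it to the standard auditors and regulators require.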
Read more on AI and ethics
This article is based on extracts from the comprehensive report on ICAEW’s AI and Trust Roundtables.
Global Ethics Day
Global Ethics Day 2024 focused on using the power of ethics to build a better world. ICAEW examined the vital role ethics must play in the use of technology in accountancy.