
Artificial Intelligence and radical uncertainty

Published: 02 Jan 2025

Most human beings are fundamentally anxious about uncertainty. The adoption of AI has created significant uncertainty, to the extent that many refer to it as the fourth industrial revolution.

It is bringing technological transformation at a speed and scale that has not been seen before, all in the context of geopolitical shifts and risks not experienced on a global scale for many years. We can all agree that we are living through interesting times!

Artificial Intelligence is an umbrella term for technologies that perform tasks that previously required humans. The “A” stands for artificial: it is man-made, not naturally occurring. It is intelligent because it takes vast quantities of data, finds patterns, and uses those patterns to form predictions about the future. The term dates back to the 1950s, and AI has been in your house and your car for some time. For example, anything that makes predictions, such as Netflix highlighting programmes you might enjoy, is using an AI algorithm. We might think of AI as an increasingly accurate prediction generator, one that is bringing about radical change to the workplace.

The release of ChatGPT brought AI firmly into the public consciousness. It is now accessible and being used for a huge range of purposes. Within your organisation, AI is likely used in finance, HR, risk, operational efficiency, product development, customer intelligence and many other areas. It can improve human performance in virtually every instance where it is used. The change is profound and, whilst some predict a correction in the current hype, the upward trajectory in uptake remains clear.

The good news is that there is evidence that governance over AI is finding its way into the boardroom as a prioritised topic. A recent Deloitte study found that 57% of 400 board members interviewed said that AI was now a topic of discussion for their board. Most of the concerns discussed related to the impact on customers and employees.

For internal audit functions, it is critical to break down the risks so that they can be assessed and evaluated, like any other risk category. To get our arms around uncertainty we need to create some structure.

Breaking down the risks

Clearly, there is a role for technology specialists to scrutinise platforms and software, checking they are coded and operate as intended. However, all internal audit functions have the capabilities to consider the risks and the impact of AI on humans. Organisations from the OECD to the Algorithmic Justice League have highlighted concerns over the potential risks and damages to individuals.

Three key risks are:

  1. Bias and discrimination: there have been reported instances of companies deploying facial recognition that was inappropriately set up to favour specific characteristics, most commonly white male faces, causing embarrassment to the organisations involved.
  2. Deepfakes: propagation of information that is false or misleading and which iteratively becomes self-reinforcing, eroding trust in businesses and institutions.
  3. Dependency: leaders rely on the outcomes of AI-generated information to make decisions, without full access to the original data or the assumptions embedded in the algorithms. In the UK, we experienced this during the COVID-19 pandemic, when the results of public exams were moderated using AI.

The ‘Four Ds’ framework

To mitigate these risks, consider the following framework:

  • Data management: evaluate how data is structured and managed as an input and output of AI technology. Check for robust governance policies to address data quality and access.
  • Design parameters: examine the assumptions and questions that are embedded within AI algorithms. Check for transparency around how these parameters align with organisational values.
  • Deployment of the technology: assess who developed the AI technology, using what standards and procedures. Check that procurement processes include due diligence of third-party developers.
  • Drift: understand the cumulative effects of iterative learning. Check that monitoring is designed to detect deviations from intended outcomes; a sketch of one such check follows below.
Focussing on these risks fosters fairness, wellbeing and trust. It avoids diminishing human capabilities, autonomy and responsibility. Ultimately, the goal is to enable culturally centred transformation where technology serves humans, not the other way around.
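To make the drift check concrete, below is a minimal sketch of the kind of monitoring evidence an auditor might ask to see: a population stability index (PSI) comparing model scores at deployment with current scores. The variable names, the synthetic data and the 0.2 threshold are illustrative assumptions, not a prescribed standard.

```python
# Minimal drift-monitoring sketch (illustrative assumptions throughout).
import numpy as np

def population_stability_index(baseline, current, bins=10):
    """Compare two score distributions; larger values indicate greater drift."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Avoid division by zero / log of zero for empty buckets.
    base_pct = np.clip(base_pct, 1e-6, None)
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

# Hypothetical data: scores captured at deployment vs. scores this quarter.
baseline_scores = np.random.default_rng(0).normal(0.50, 0.10, 5_000)
current_scores = np.random.default_rng(1).normal(0.58, 0.12, 5_000)

psi = population_stability_index(baseline_scores, current_scores)
if psi > 0.2:  # commonly quoted rule-of-thumb threshold for material drift
    print(f"PSI {psi:.3f}: investigate model drift")
else:
    print(f"PSI {psi:.3f}: distribution broadly stable")
```

In practice the threshold, the baseline period and the metric itself should be set by the model owners and challenged by audit, rather than taken from a generic rule of thumb.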

Practical guidance for internal auditors

To bring this framework to life, here are practical steps internal auditors can take:

  • Establish an AI risk assessment process: work with technology teams to create an AI-specific risk register, incorporating inputs from across the business.
  • Audit data integrity: test data inputs for completeness and accuracy and evaluate how data governance policies are enforced (a minimal sketch of such tests follows this list).
  • Engage cross-functional stakeholders: collaborate with HR, ethics committees, and operational teams to ensure AI aligns with organisational values.
  • Monitor and adapt: build periodic reviews into audit plans to assess AI system performance and its evolving risks.
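
As an illustration of the data integrity step, here is a minimal sketch of completeness, validity and uniqueness checks on an extract of AI input data. The DataFrame, column names (customer_id, income, application_date) and rules are hypothetical; real tests would be driven by the organisation's own data governance policy.

```python
# Illustrative data-integrity checks on a hypothetical AI input extract.
import pandas as pd

df = pd.DataFrame({
    "customer_id": [101, 102, 103, None],
    "income": [32000, -500, 48000, 51000],
    "application_date": ["2024-11-01", "2024-11-03", "not a date", "2024-11-07"],
})

findings = []

# Completeness: key identifiers should never be missing.
missing_ids = df["customer_id"].isna().sum()
if missing_ids:
    findings.append(f"{missing_ids} records missing customer_id")

# Validity: income should be non-negative, dates should parse.
if (df["income"] < 0).any():
    findings.append("negative income values found")
bad_dates = pd.to_datetime(df["application_date"], errors="coerce").isna().sum()
if bad_dates:
    findings.append(f"{bad_dates} unparseable application_date values")

# Uniqueness: duplicate identifiers can distort model training and scoring.
dupes = df["customer_id"].dropna().duplicated().sum()
if dupes:
    findings.append(f"{dupes} duplicate customer_id values")

print(findings or "No exceptions noted")
```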

The bigger picture

AI does not exist in a vacuum. Its adoption is shaped by economic, geopolitical, and regulatory developments. For example, the EU's AI Act introduces stringent requirements for high-risk AI systems. Internal auditors should be aware of these external factors and incorporate them into their risk assessments. Additionally, AI's impact on labour markets, particularly through automation, has broader societal implications that organisations must address responsibly.

Conclusion

AI brings radical uncertainty but also immense opportunity. Internal auditors have a vital role in navigating these complexities, ensuring organisations harness the power of AI responsibly and ethically. By breaking down risks, implementing the Four Ds framework, and staying attuned to the broader context, internal audit can help organisations thrive in the AI-driven future.