Securing the trust of stakeholders and the confidence of staff when deploying artificial intelligence (AI) technologies is vital for the success of an organisation. However, neither can be achieved without effective governance, argues David Gomez, ICAEW’s Senior Lead, Ethics, who has hosted a series of roundtables to explore the ethical use of AI with practitioners and experts.
“The groups that shared their experiences of AI with us highlighted the danger of a lack of clarity in ownership and governance roles,” he confirms. “They emphasised that accountability and responsibility within organisations must be clearly mapped out. Organisations exploring AI tools need to have clear policies, procedures and processes in place first.”
Having such documentation in place will help to:
- support compliance with regulations and standards;
- build confidence;
- safeguard confidentiality;
- promote quality and consistency;
- engender trust;
- encourage the ethical use of AI tools; and
- foster an ethical culture within the organisation.
Several organisations represented at the roundtables had already developed and publicly shared specific business-use rules and responsible-use frameworks applicable to their networks. One participant revealed that every AI use case at their organisation must pass two tests (sketched in code below):
- Can we? Is the proposed use lawful and do we have the required skills and competences?
- Should we? Does the proposed use align with our corporate values and ethics?
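In code, that two-question gate might look like the following minimal sketch. The record fields and pass criteria are hypothetical assumptions for illustration, not the participant’s actual framework:

```python
from dataclasses import dataclass, field

@dataclass
class AIUseCase:
    """Hypothetical record of a proposed AI use case."""
    description: str
    is_lawful: bool            # 'Can we?' - a lawful basis has been confirmed
    skills_in_place: bool      # 'Can we?' - required skills and competences exist
    aligns_with_values: bool   # 'Should we?' - consistent with corporate values and ethics
    ethical_concerns: list[str] = field(default_factory=list)

def approve(use_case: AIUseCase) -> bool:
    """Both tests must pass before a use case proceeds."""
    can_we = use_case.is_lawful and use_case.skills_in_place
    should_we = use_case.aligns_with_values and not use_case.ethical_concerns
    return can_we and should_we
```

The point of the structure is that neither test can override the other: a lawful, well-resourced use case still fails if it conflicts with the organisation’s values.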
One recommended framework for AI governance is the ‘three-pillars’ approach, which focuses on three guiding principles: transparency; accountability in decision-making; and fairness and non-discrimination.
Roles and responsibilities
The development and deployment of an AI system often requires multiple organisations, each potentially playing several different roles. As one roundtable attendee said: “Clearly defining and mapping these roles from the outset is vitally important – everything else flows from there.”
Groups agreed on the value of creating a dedicated AI oversight role within organisations, similar to that of the data protection officer. This role would ensure the proper and ethical implementation and use of AI. However, given the significant potential risks associated with AI, some participants advocated for director-level responsibility in this area, emphasising the need for high-level accountability and strategic oversight.
Alongside this oversight role, it is important to ensure appropriate expertise exists at board level and at key decision-making points. This expertise should encompass AI and AI assurance, statistical analysis and ethics. Such knowledge will enable the organisation to determine suitable confidence levels for AI outputs and ensure adequate provisions for service continuity and upgrades, including processes to detect ‘model drift’ over time (the gradual decline in a model’s performance as real-world data diverges from the data on which it was trained). This comprehensive approach helps maintain the reliability and effectiveness of AI systems throughout their lifecycle.
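One common statistical check for drift, offered here purely as an illustrative sketch, is the population stability index (PSI), which compares the distribution of a model’s recent inputs or outputs against a baseline. The bin count and the conventional 0.2 alert threshold below are assumptions an organisation would calibrate for itself:

```python
import numpy as np

def population_stability_index(baseline: np.ndarray, recent: np.ndarray, bins: int = 10) -> float:
    """Measure how far the 'recent' values have shifted from the 'baseline' distribution."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    baseline_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    recent_pct = np.histogram(recent, bins=edges)[0] / len(recent)
    # Floor the proportions to avoid division by zero and log(0)
    baseline_pct = np.clip(baseline_pct, 1e-6, None)
    recent_pct = np.clip(recent_pct, 1e-6, None)
    return float(np.sum((recent_pct - baseline_pct) * np.log(recent_pct / baseline_pct)))

# A PSI above roughly 0.2 is often read as a signal of significant drift
baseline_scores = np.random.normal(0.0, 1.0, 10_000)   # e.g. model scores at deployment
recent_scores = np.random.normal(0.4, 1.0, 10_000)     # e.g. model scores this quarter
if population_stability_index(baseline_scores, recent_scores) > 0.2:
    print("Possible model drift - escalate for review")
```

Scheduled checks like this give the oversight role a concrete trigger for escalation, rather than relying on users noticing that outputs have deteriorated.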
The internal audit function also has a key role to play in relation to assurance of AI, including:
- undertaking compliance assessments of AI systems and model assurance;
- completing conformity assessments;
- scrutinising processes for procuring AI systems; and
- requiring independent ISAE 3402 reports from vendors.
Some of our roundtable participants suggested that the internal audit function had a particular responsibility to educate the board about potential AI risks and to evaluate the practical implementation of policies and frameworks.
Others highlighted the importance of internal ethics and risk committees overseeing AI processes within accountancy firms. They stressed the need for senior partner involvement in these committees or, at a minimum, ensuring they have visibility of the oversight work.
Gomez notes that the principle of accountability includes the notion of contestability and redress. Given how quickly AI tools can spread misinformation and industrialise errors, some stakeholders may prioritise swift correction over financial compensation. Gomez believes this reflects the essence of professionalism: when mistakes occur, true professionals acknowledge their responsibility and take action to rectify the situation.
Risk management
Gomez advocates for mapping out risk and accountability across the entire supply chain, including for each individual component. The Information Commissioner’s Office has a suite of guidance and an AI and Data Protection Risk Toolkit, which adopts an area-by-area and life-cycle approach that could prove useful for organisations just starting out with AI.
At the roundtables, participants shared diverse approaches to AI risk management. One firm advises clients to maintain:
- a repository of risks and risk themes; and
- a taxonomy of ethical dilemmas, including issues such as privacy and its impact on employer/employee relationships (sketched below).
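As a rough sketch of how such a repository and taxonomy might be structured, the themes and fields below are invented for illustration and would differ from the firm’s actual taxonomy:

```python
from dataclasses import dataclass
from enum import Enum

class EthicalTheme(Enum):
    """Illustrative taxonomy of ethical dilemmas; the categories are assumptions."""
    PRIVACY = "privacy"
    EMPLOYMENT = "employer/employee relationships"
    FAIRNESS = "fairness and non-discrimination"
    TRANSPARENCY = "transparency"

@dataclass
class RiskEntry:
    risk_id: str
    description: str
    theme: EthicalTheme
    mitigation: str

# The repository is then a queryable collection of entries
repository = [
    RiskEntry("R-001", "Staff monitoring data reused to train models",
              EthicalTheme.EMPLOYMENT, "Apply strict purpose limitation"),
]
privacy_risks = [r for r in repository if r.theme is EthicalTheme.PRIVACY]
```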
Another organisation has compiled a risk register of AI and generative AI risks, which it uses to develop a series of granular business rules (sketched in simplified form below) governing:
- how AI tools are used;
- to whom those tools are made available;
- who has authority to use them; and
- the purposes and projects on which they can be used.
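To show how rules like these could be made operational, here is a hedged sketch of a rule table and check; the tool names, roles and purposes are hypothetical placeholders rather than the organisation’s real rules:

```python
# Hypothetical rule table: tool -> (permitted roles, permitted purposes)
BUSINESS_RULES = {
    "genai-assistant": ({"analyst", "manager"}, {"internal drafting", "research"}),
    "forecasting-model": ({"manager"}, {"budgeting"}),
}

def use_permitted(tool: str, role: str, purpose: str) -> bool:
    """Check a proposed use against the rule table, defaulting to deny."""
    if tool not in BUSINESS_RULES:
        return False
    roles, purposes = BUSINESS_RULES[tool]
    return role in roles and purpose in purposes

assert use_permitted("genai-assistant", "analyst", "research")
assert not use_permitted("forecasting-model", "analyst", "budgeting")
```

Encoding the rules as data also makes the quarterly review described below straightforward: the table itself is the artefact being reassessed.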
The same firm has also introduced an annual training programme on AI and the ‘AI business rules’, as well as dedicated support teams to talk through new use cases for AI proposed by employees. The business rules are assessed every quarter to ensure they remain fit for purpose.
Read more on AI and ethics
This article is based on extracts from the comprehensive report on ICAEW’s AI and Trust Roundtables.
Global Ethics Day
Global Ethics Day 2024 focused on using the power of ethics to build a better world. ICAEW examined the vital role ethics must play in the use of technology in accountancy.