What sort of framework does the UK need to drive safety in artificial intelligence (AI), while ensuring that businesses can innovate with the technology at a competitive pace?
That was a key question explored in a recent talk at Chartered Accountants’ Hall. Held in early March, the round-table event brought together an impressive roster of ICAEW members, each of whom has witnessed the rise of AI first hand. While some work directly in the accounting profession, others have taken their skills into enterprise.
Chaired by ICAEW President Malcolm Bacchus, the event welcomed as special guest Shadow Secretary of State for Science, Innovation and Technology Alan Mak MP, who committed to being “in listening mode” as the discussion unfolded.
The members’ thoughts on issues around regulatory design for AI will no doubt have given him much to consider.
Multiple layers
First to delve into the topic of regulation was Becky Shields, Partner and Head of the Data Analytics and AI team at Moore Kingston Smith. Shields pointed out that even as a mid-tier organisation, her firm faces multiple layers of oversight. While its primary work is regulated by the main accountancy bodies, it also has a legal services practice covered by the Solicitors Regulation Authority and financial advisers overseen by the Financial Conduct Authority.
As such, she said: “Our biggest concerns are regulatory conflict and duplication of effort. Thinking about our start-up clients, it’s quite hard now to be entirely domestic. Many of them will be dealing with multiple layers of regulation across different territories. If UK plc wants to reap the benefits of AI, regulation must not impede innovation.”
Similar concerns were expressed by Jo Muncaster, Head of Finance at AI-enablement specialists digiLab. Although the company’s platform is sector-agnostic, each industry it works with has its own regulators. “For them to assess whether our tools are compliant requires us to open up our tech stack to a level of transparency that makes it very hard to protect intellectual property,” she said. “Over-regulation of AI could make matters worse, and undermine competitive advantage.”
Glenn Fletcher, CEO of industrial sensors manufacturer Tribosonics, said that rather than bring in new and complex regulation, the government should encourage greater compliance with relevant ISO standards. For example, ISO/IEC 42001 and the in-development ISO/IEC 27090 set a number of AI benchmarks. In Fletcher’s view, ISO alignment would be much simpler for businesses to manage than statutory requirements. “If you’re dealing with large customers who’ll only work with you if you’re certified, that will provide a natural control,” he said.
However, Shields pointed out that some certifications can be onerous – particularly for small entities, for which the approval process “can take months”.
With that in mind, Pippa Goodall, Head of Finance at retail-focused AI solutions provider Edgify, asked: “Is there a middle ground, where enterprises can somehow still be safe, but keep the agility they need to move forward?”
For Fletcher, the answer is for companies to draw up a time-scaled roadmap towards certification. “If you’ve got a roadmap to show you’re on that journey, that will impress your clients and help your team.”
Rigorous guidance
PwC Partner and Global Head of AI Trust Leigh Bates highlighted respected best-practice initiatives in the cyber arena, such as the US NIST Cybersecurity Framework and the UK Cyber Essentials scheme. He suggested that UK stakeholders could collaborate on developing a similar suite of ‘AI essentials’ to build confidence in the fast-growing sector.
“We need to show that we’re consistently applying trust by design, and incorporating the right ethical principles into the development and deployment of AI use cases at scale,” Bates said. “What guardrails, testing and monitoring procedures are needed to secure greater confidence in AI-enabled applications?”
Forvis Mazars’ Director of Innovation and Digital Skills Robbie White noted that one major driver behind the success of Cyber Essentials is that adoption is now expected of companies that wish to work with the public sector. In White’s view, if an ‘AI Essentials’ scheme were to go ahead, it would be important for stakeholders to set minimum standards that are not too onerous, but still provide rigorous guidance.
“We would need to give companies that guidance sooner rather than later,” he said. “That way, they can plot a path towards applying certain technologies in, say, three years’ time, and work on upskilling their teams accordingly.”
For Mark Cankett, Deloitte Partner for Algorithm and AI Assurance, the question of interoperability between rulesets in different jurisdictions is every bit as important as how UK-specific regulation should work. “At some point, UK companies will be exposing EU residents to their AI,” he said. “Therefore, the extraterritorial nature of the EU AI Act will come into play. We must recognise that, as they grow, UK companies will go on a journey of being more exposed to overseas codes and frameworks.”
Prime opportunity
In tandem with regulation, another critical force in the safety debate is AI assurance. For Bates, the term needs greater clarity so that it – and the various steps it involves – can be more broadly understood. He pointed to the Singapore Government’s AI Verify framework as one example of how a prominent territory is working to standardise AI governance, testing and validation to inform AI assurance. “It’s difficult to reach full transparency and explainability of AI models,” he said. “But with robust testing, we can get to a degree of comfort that AI use cases are operating within risk tolerance against responsible AI, trust and ethical principles.”
“There’s clearly a need for AI assurance and certification – not just in the UK, but globally,” said ICAEW Head of Tech Policy Esther Mallowah. “However, not enough people are focusing seriously on it right now. For me, this presents a prime opportunity for the UK to lead. Getting assurance right would also encourage domestic demand and adoption.”
Rounding off the discussion, Bacchus said: “Regulators need to look at risks and rewards in both the short and long term, rather than simply regulating for the situation that exists today.” He suggested looking at a type of regulation akin to the food industry’s five-star system. “It’s a light-touch system that adjusts itself without much intervention at all. In AI, we need a similar sort of graduated framework of certification that will provide assurance, but won’t stifle innovation.”
Coming up…
For further insights on these areas of the AI debate, join ICAEW’s inaugural AI Assurance Conference at Chartered Accountants’ Hall on Monday 19 May.