In 2022, Fabrizio Dell’Acqua, a postdoctoral researcher at Harvard, designed an experiment to measure how engaged people remain when working with AI. He asked 181 recruiters to review 44 CVs, giving a random subset of the group algorithmic recommendations about the candidates that varied in quality. Those using the higher-quality AI models were less accurate in their assessment of applicants than those with lower-quality AI.
“On average, recruiters receiving lower quality AI exerted more effort and spent more time evaluating the resumes, and were less likely to automatically select the AI-recommended candidate,” he wrote. “The recruiters collaborating with low-quality AI learned to interact better with their assigned AI and improved their performance.”
This finding – that higher-quality AI can result in lower human performance – highlights the need to strike a balance when designing AI models.
As AI performance improves, the humans using it have a greater incentive to delegate to the machine for efficiency and accuracy. If the AI is perceived to be too high quality, workers are at risk of “falling asleep at the wheel” and mindlessly following its recommendations, Dell’Acqua writes. “In such settings, maximising combined human/AI performance requires trading off the quality of AI against the potential adverse impact on human effort.”
This poses an ethical dilemma for finance teams. Over-reliance on AI is a potential danger in all fields of work, but in financial and regulatory reporting there are prescriptive rules to be met and no tolerance for errors. It is imperative that finance professionals guard against a misplaced sense of assurance when using AI, so that high standards are maintained.
While Dell’Acqua’s suggestion of using less-reliable AI is not a credible option where accuracy is critical, firms must nevertheless put control frameworks in place to ensure that AI-generated work is properly validated.
“While AI offers incredible benefits, it can’t replace professional judgement and scepticism,” says Shaun Taylor, CFO Americas for Standard Chartered Bank. “So when finance teams adopt AI technology, they need to set clear boundaries, guidelines and procedures on both the underlying data and the output.”
To mitigate these risks, finance professionals must maintain a culture of ‘explain and verify’, using basic audit and accounting controls such as reconciliations and the validation of transactional data against invoices. “I don’t see how AI could replace these checks and balances,” says Taylor.
This must be driven from the top and remain an integral part of the internal challenge process, particularly where AI input is involved. Guardrails need to be in place, with room for them to evolve as the technology – and the regulation around it – advances.
In sectors such as financial services, greater alignment on principles for AI usage is necessary; this is particularly important for businesses operating across multiple geographies and regulatory regimes. Taylor suggests the best approach will be to expand existing control and governance frameworks, rather than creating a standalone framework for AI.
“CFOs ultimately retain accountability for our financial reporting,” he says. “While I’m excited at the near-term benefits of incorporating AI into our workplace, I’m also mindful that over time we do not allow our skills to erode through an over-reliance on AI-generated results.”