In the first part of our mini-guide on how to partner effectively with artificial intelligence (AI), we heard about the factors to consider when choosing the right tool for your finance function's or accounting firm's needs. We also heard that it is vital to review your chosen tool's outputs before they are shared externally.
However, reviewing outputs is not just something to do in the days or weeks while the tool is bedding in. According to RSM UK Audit Data Analytics Director Konrad Bukowski-Kruszyna, an effective partnership with your AI depends on reviewing its outputs on an ongoing basis.
Ongoing reviews serve two objectives: translating raw outputs into information that makes sense in an accounting or auditing context, and knowing when the system needs adjusting so that it focuses on the right matters.
Faith and reliance
Bukowski-Kruszyna cautions that, in some cases, it may be quite difficult to understand why an AI tool has produced certain outputs. As such, it is important to have some form of sense-checking or screening process in place.
He explains: “Let’s say you’ve asked the tool to analyse a stack of lease agreements or sales contracts, or standardise a large set of invoices. You will want to have the same level of faith and reliance in the AI’s outputs as you would if you were asking a junior member of staff to carry out the same work.
“Yes, you now have some software that can do in five to 10 minutes what your junior may have taken a day or two to complete. And that may enable your team to get through a greater volume of higher-value work as a result. But your professional duty to guarantee relevance and accuracy still exists.”
Bukowski-Kruszyna points out that reviewing outputs also requires accountants to harness their professional judgement, discernment and powers of interpretation. An AI tool may yield results with potentially important accounting implications that are not immediately obvious from how those outputs are presented, so a trained eye that can join the dots and spot what matters is a valuable asset.
He describes a case in which his team was reviewing recalculated revenue and deferred income for a contract-based software firm. Out of the thousands of contracts the company had issued over the previous two years, three raised potential concerns.
All three were issued from a new branch the firm had set up in the final week of the most recent financial year. “That discovery enabled us to go to the client and say, ‘From an audit point of view, nothing needs to be materially adjusted in the financial statements, but you may want to take a look at this department over here and see if there are any problems you need to nip in the bud before something big goes wrong.’”
For Bukowski-Kruszyna, this type of outcome shows how working with AI can become a true partnership, with the capabilities of the software and the human professional effectively supporting each other.
“It wasn’t the tool’s process that was saying, ‘Here’s an issue,’” he stresses. “It was the human auditor looking at the output and saying, ‘Hmm, this looks a bit odd.’”
So from a partnering perspective, AI will not necessarily do the whole job for you. It will steer you in the direction you may want to go, but you will still need to do some legwork and have probing conversations with clients to get to the nub of any issues they may be having. "After all, that's what clients expect when they're using professional services."
Evolutionary process
In Bukowski-Kruszyna's assessment, the better people get at using an AI tool, the better placed they are to provide informed feedback that nudges it towards meeting their needs more effectively.
“It’s an evolutionary process on both sides,” he says. “As with any tech implementation, there’s a change management phase when you introduce something that feels perhaps a bit scary. But once people get over that initial hump, they start to say, ‘Oh, this is making my life easier – and I have a much better understanding of how it works.’
“After a few months, you start receiving feedback from teams along the lines of, ‘OK – we get that it can do this… but could you update it so it can also do that?’ Or, ‘Client X is really impressed by what we’ve presented – but could we drill down even further into this or that data type?’”
At RSM, Bukowski-Kruszyna's team regularly touches base with partners and managers across departments to talk about upcoming client work and how the firm's in-house and third-party tools could be deployed on it. His team is particularly keen to find out whether any parts of the workload merit bespoke solutions.
“Those talks really get to the heart of whether our tools are doing what our people need them to,” he says. “We also have a great quality review team and a strong, national audit technical team that we work with when developing these processes, to ensure they’re on target.”
Those teams are involved in various reviews underway on both closed and live files. They provide valuable feedback, says Bukowski-Kruszyna, particularly if they feel the tools are being underutilised in any way and people are missing a trick. "That helps us to update our guidance for staff and engage with them more meaningfully."
Bukowski-Kruszyna recommends taking a proactive approach to seeking users' feedback. "If you sit back and wait for it, nine times out of 10, people will say the tech's broken," he says. "As anyone in IT support will tell you, when all's well, you're ignored, but when something goes wrong, it's all your fault. So there's a need to get out there and engage with people, and to be encouraging and supportive about it, too. That will prevent people from slipping back into the comfort of old habits, particularly at busy times."