Host
Philippa Lamb
Guests
- Konrad Bukowski-Kruszyna, Audit Data Analytics Director, RSM UK
- Fay Bordbar, Global Digital Skills Lead, Mazars
- Oliver Nelson-Smith, Tech Policy Manager, ICAEW
Producer
Natalie Chisholm
Transcript
Philippa Lamb: Hello, this is the In Focus podcast. I’m Philippa Lamb and today we’re discussing artificial intelligence in audit. How are auditors really using generative AI? And how can firms sift marketing fact from marketing fiction, and assess the risks and benefits of using AI tools?
Konrad Bukowski-Kruszyna: All of these discussions really need to be not how can we apply AI, it’s what are our biggest problems and how can we solve those? And if we need to use generative AI to solve them, great; if we don’t need to use generative AI, also great.
Fay Bordbar: There were audit partners in the room saying, well, what if AI gets it wrong? Okay, well, what if humans get it wrong? You know, we’re not perfect.
PL: With me to discuss those questions we have Konrad Bukowski-Kruszyna. He is Audit Data Analytics Director at RSM UK. Fay Bordbar is Global Digital Skills Lead at Mazars, and Oliver Nelson-Smith is Tech Policy Manager for ICAEW. Hello, everyone.
Now, everyone is talking about AI. Today we are talking about generative AI. So I’m going to ask you, Konrad, for a quick definition of what that is. How is it different to the sort of AI tools that firms might already be using?
KB-K: AI is a very broad church and encompasses a lot of technology that a lot of people probably don’t even consider to be AI. OCR or optical character recognition technology, which has been around for many decades now, is a form of AI. And actually, it’s probably something that’s very commonly used in audit. There are a number of products out there that use it, and so a lot of auditors are probably familiar with that level of AI.
Where generative AI is different is the sort of creativity inherent within it and its ability to analyse what we call unstructured data. Unstructured data is effectively normal narrative text that you or I as humans can look at and understand, but because it’s not in a nice tabular format, traditional computer systems really struggle to make head or tail of it. I’d say that’s probably the major difference really. Traditional tools focus on structured data – tables and Excel spreadsheets are very good examples of that – whereas generative AI is more focused on narrative. So contracts, leases, that sort of thing.
PL: Got you. That’s really helpful. Thank you.
Fay, have you got any other thoughts on that?
FB: The way that I think about generative AI is that it generates new text or images or something new, it generates something. So it’s not just analysing something, it’s actually generating a response – a human-like response in many cases.
PL: It’s early days, isn’t it? How are you seeing generative AI being used in audit right now?
FB: It’s a great question. And I think it’s a question that everybody’s asking everywhere. What are people actually doing with AI and generative AI? I think – I’m not sure if you’ll agree, Konrad – but I think a lot of firms are still at the testing phase, we’re still trying to work out what infrastructure we’ve got and what we need to support the AI that is best going to help what we do and how we serve our clients. I know that there are various models in testing and we’ve got access to a lot of on-the-market tools, such as Microsoft’s Copilot, or there’s ChatGPT, and so on. But there’s also now the opportunity to build our own AI tools. And so there’s a lot of work going on to test the ability of the tools on the market. And also to work out whether or not we should build our own tools with our own information, our own policies, and really refine those. But at the moment – as in, 95% of cases – we are still just testing.
PL: And what is the route to deciding what you test, because presumably you’re having a lot of internal conversation with audit about what would be useful to them, rather than trying to impose solutions on them?
FB: I think there are two ways that you can look at it. You can look at your audit process as a whole and think, which stages of this process can we apply AI or generative AI to in order to improve it? The other angle is to ask, what problems are we facing in certain areas of the audit and how can we solve those problems? And I’m deliberately not saying, how can we solve those problems with generative AI? Because often AI may not be the solution. And actually a really fun game to play is: why are we seeing these problems in that part of the audit process, and is generative AI the right answer to solve them?
PL: Are those conversations quite difficult to have in the sense that I’m guessing quite a lot of audit professionals don’t know enough about generative AI to be able to have a really useful conversation with you about what it might be able to do for them?
FB: Yeah, exactly. So, stage one is training people in the basics: we call it the foundations of AI. So what is AI? What is generative AI? What can it do? What are the possibilities? And once we’ve trained people in those basics, they need to come up with the ideas themselves to think, okay, how can I use this in my audit?
We ran a two-and-a-half-hour live webinar training course from an external provider to train people in these foundations. And at the end of it we asked people, what further training do you want? And the biggest thing by far was, we want some audit use cases. And actually now we’re going back to them to say, well, that’s your job, you need to give us the use cases, because there’s no point in us generating use cases and training them on those use cases, if they’re not useful.
PL: And you’re the audit professional. So you understand what audit needs… So it’s a bit chicken and egg, isn’t it?
FB: Yes!
PL: Is this what you’re finding, Konrad?
KB-K: Yes, as Fay says, it’s not just with generative AI, it’s with a lot of tools that you look to implement and roll out. There’s certainly a lot of technical analysis that you can do from the perspective of the technology data analytics team that I lead. But at the end of the day Fay is absolutely right that our most used apps that we’ve developed internally, our most used tools, are those where the audit professionals have come to us and said, look, this is a real pain point, we’re spending too much time on this particular task, what can you do to help us cut down the amount of time or increase the quality of what we’re doing? It’s those conversations that, I think, need to precede every other discussion. All of these discussions really need to be not how can we apply AI, it’s – as you said, Fay – what are our biggest problems and how can we solve those? And if we need to use generative AI to solve them, great; if we don’t need to use generative AI, also great.
PL: So step one is establishing a really open channel between the audit team and the tech team…
KB-K: Absolutely.
PL: …which I’m kind of guessing may not have been there in the past?
KB-K: That’s a fair assumption. It’s that age-old dichotomy between one group of professionals not necessarily speaking the same language as another group of professionals. Certainly within RSM and the way that we’ve set up the team that I lead, we try to act as those middlemen. We have the audit experience, we’ve got some of the IT and technological experience, and we try to act as interpreters between the hardcore IT geeks and the hardcore audit geeks. Everyone’s a geek, as far as I’m concerned.
PL: Fay talked about trying out various models, and I can see how that would be great, but are there any issues around doing that? Regulatory issues around doing that?
KB-K: A huge number of regulatory issues. One of the things that we’ve certainly seen a lot since the initial release of ChatGPT back in November 2022, I think it was, is that the publicity around it has got a lot of our clients asking questions about data governance and security, which they should always have been talking about and considering. GDPR is the obvious framework that comes to mind there. But it’s good to see that, with a lot of this now being in the public eye, there’s a lot more focus not just on generative AI but on all the other ways that we use data: the way that we obtain data from our clients, how do we store it, where is it stored? Brexit has caused a lot of fun with data centres – we can’t necessarily rely on EU data centres any more because the UK is no longer part of the EU. And that all feeds into the discussion around generative AI, because when you are prompting one of these AI tools, you are sending that information off to wherever the data centre for that tool is. In all likelihood, if you’re using ChatGPT or Claude by Anthropic, your information is probably going to an EU-based data centre. If you’re based in the US, it’ll be a US-based data centre.
But then you start getting into the realms of: okay, as an auditor I have an engagement letter with a client to deliver a specific set of audit services. Inevitably, that will cover a bit of data governance and how we store data. There are now questions over, well, if we’re going to be using a tool like this, we need to let our clients know – but also, how do we then deal with that transmission of data?
What you will inevitably see – picking up on what Fay was saying earlier around building your own AI tools – is that you’ll probably see more and more accounting firms looking to onshore a lot of that and use things like Azure OpenAI to restrict all of this data within the firm’s environment, but still using that same underlying foundational model without running the risk of a data breach or data leak.
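To make that concrete: below is a minimal sketch of what routing prompts through a firm-controlled Azure OpenAI deployment might look like, so that client data stays within the firm’s own cloud environment. The endpoint, deployment name and prompt text are hypothetical placeholders, not any firm’s actual configuration.

```python
import os

from openai import AzureOpenAI

# Requests go to the firm's own Azure resource rather than a public
# consumer chatbot, keeping prompts inside the firm's environment.
client = AzureOpenAI(
    azure_endpoint="https://example-firm.openai.azure.com",  # hypothetical
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="audit-gpt4o",  # hypothetical deployment name on the firm's resource
    messages=[
        {"role": "system", "content": "You assist audit staff with document review."},
        {"role": "user", "content": "Summarise the key terms of the lease below: ..."},
    ],
)
print(response.choices[0].message.content)
```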
PL: Before we get on to what’s working well, that raises another question in my head about client data and how well that’s optimised to work with these tools… Everyone around the table is sighing. Okay, what are the issues there?
KB-K: I think the giggles and laughs probably told you an awful lot.
Oliver Nelson-Smith: I think the hype around the technology is getting every business and industry thinking about how they can use it and – as was previously alluded to – it’s important to start thinking about your technology stack a bit more holistically: your data governance, your controls around it, and what information you’re actually holding and for how long. It should be said that most businesses should already have been doing this. At the moment client data might not be fantastic, and you’ll probably have to do some clean-up before inputting it, and you still have to have quite a lot of control over it anyway. As Konrad remarked, there is a control issue about where the data’s actually going, and about the client’s comfort with where their data is going. There’s also a bit of an accountability and liability side to it: if the model you’re using comes up with a result that is wrong, you will still hold the accountability for it. It won’t be OpenAI or Microsoft holding the bag at the end of it. It’s all part of a journey. If you’re a bit of a technologist, it’s quite exciting, because it means that finally businesses are starting to think about their systems more holistically, instead of just buying and adding bits on and ending up with these patchworks that are cobbled together and barely function – and don’t really function ideally for some of these technologies.
PL: Well, having explored some of the many issues, shall we talk about what’s working well? What are you all seeing this do effectively?
FB: A good example is – I sit in Mazars Group and we’ve got a global reinventing audit team – one of the tools that we have built and are testing is Smart Notes Analyser. It looks at the disclosures in the accounts and assesses whether or not the client has covered all the correct disclosures based on the financial information that’s in the rest of the report. That is something where AI can arguably do it better than humans, because it’s looking at it through the lens of the training you’ve given it. And if you’ve trained it on all the audit standards, it won’t forget certain audit standards. It will know all the audit standards you’ve given it and it can check all of those in turn, whereas a human could have a bad night’s sleep or not have caught up on some of the new disclosures that are required, and unknowingly could miss a disclosure or miss something that needed to be added or checked or whatever it might be. That’s the benefit of AI: it doesn’t have a bad day, it doesn’t have a bad night’s sleep.
PL: It’s rigour.
FB: Yes, exactly. I was at a thinktank session about a year ago and there were audit partners in the room saying, well, what if AI gets it wrong? Okay, well, what if humans get it wrong? You know, we’re not perfect. That’s why regulations exist in the audit landscape. So in actual fact, I think AI is more accurate probably than humans in a lot of that sense. If you give it a certain set of rules to follow, it’s really good at following those rules.
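Mazars hasn’t published how Smart Notes Analyser is built, but the rule-following pattern Fay describes can be sketched in a few lines. Everything below – the standards list, the wording and the `ask_model` callable – is a hypothetical illustration of the pattern, not the tool itself.

```python
# Illustrative sketch: check every required disclosure against the
# accounts, one rule at a time, so no standard is ever skipped.
REQUIRED_DISCLOSURES = {
    "IFRS 16": "Lease liabilities and right-of-use assets are disclosed.",
    "IAS 24": "Related party transactions are disclosed.",
    "IAS 1": "Going concern basis and any material uncertainties are disclosed.",
}

def check_disclosures(accounts_text: str, ask_model) -> dict:
    """Ask the model about every rule in turn; unlike a tired human
    reviewer, the loop cannot forget a standard it has been given."""
    results = {}
    for standard, requirement in REQUIRED_DISCLOSURES.items():
        prompt = (
            f"Requirement ({standard}): {requirement}\n\n"
            f"Accounts extract:\n{accounts_text}\n\n"
            "Answer 'met', 'not met' or 'unclear', with a one-line reason."
        )
        results[standard] = ask_model(prompt)  # any approved chat model
    return results

# Demo with a stub in place of a real model call:
print(check_disclosures("Note 14: leases ...", lambda p: "unclear - demo stub"))
```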
PL: Other use cases?
KB-K: I think that’s a really big one. Something that we’ve certainly been looking at as well within RSM is actually looking at the quality of output that we get. For a lot of businesses, the end result of an audit will be an audit opinion that gets filed with the financial statements. But you will also have an audit findings report, which will detail the work that’s been undertaken, any particular issues identified, how they were resolved during the audit. And there’s an increasing focus, quite rightly, on the system of control within the business subject to audit. And that’s where a lot of audit committees and management boards find a lot of value from the audit process, having someone who’s completely independent coming in, and really digging through the sorts of things you’re doing and then identifying potential issues, but also, where permitted in terms of the independence rules, trying to suggest some solutions to those as well.
What we’ve started playing around with is having an AI tool – Microsoft’s Copilot, for example – review, say, a first draft of some of these findings reports. You give it a personality of, you know, you are a member of an audit committee, you’ve gone through an audit process, this is what your business does, read through this report, highlight what’s been done well – because it’s always good to know what you’ve done well – but also highlight areas for improvement and something that you might look for. A lot of the time, the comments it comes back with have actually been really valuable and really insightful. And we’ve had a number of audit managers and partners say actually, that’s a really good point. And we can then have a human come in and say right, we probably need to address X, Y and Z, the end result being a more tailored, more relevant and hopefully more engaging audit report than you might have otherwise got.
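As a concrete illustration of that ‘give it a personality’ step, the snippet below assembles the kind of reviewer prompt Konrad describes. The persona wording, company details and report placeholder are invented for illustration – this is not RSM’s actual prompt.

```python
# Sketch of a persona-style review prompt for a draft findings report.
persona = (
    "You are a member of the audit committee of a mid-sized UK "
    "manufacturing company that has just been through its annual audit."
)
task = (
    "Read the draft audit findings report below. First highlight what "
    "has been done well, then highlight areas for improvement and "
    "anything you would expect to see covered but cannot find."
)

draft_report = "...draft findings report text..."  # placeholder for the real draft

prompt = f"{persona}\n\n{task}\n\n---\n{draft_report}"
# 'prompt' is then sent to whichever chat tool the firm has approved
# (e.g. Copilot or an Azure OpenAI deployment), and a human reviews the output.
print(prompt)
```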
PL: How rapid is that process?
KB-K: It takes seconds. The analogy I always go back to is something that one of the guys on my team said right at the start of all of our testing of AI tools: imagine one of these AI copilots is a hyper-efficient, very keen intern or new starter in the business. They’re happy to go out, pull all this information together, they may not necessarily always get it right, so you’ve got to review it, but they will do something in no time flat and save you time from having to do it yourself. And I think that mentality, and going back to that question of, what if the AI gets it wrong, well, you should be reviewing the output, much as you’re reviewing the output of all the human members of your team as well.
PL: So this is not a substitute for human oversight?
FB: Completely. Yes. It’s not that AI is always right, but as Konrad said, it’s very good and very efficient at digesting large quantities of information quickly and summarising them. You can imagine that a simple audit task, such as reviewing a company’s board minutes for the past year, is quite boring for a junior to do. And let’s be honest, as humans we do skim read, and we might miss things. Whereas if you give that to an AI tool, it can digest all of that information and give you quite a comprehensive summary very quickly. And I would argue that that is done more effectively than by a human.
PL: What about tasks where we’re not talking about that sort of binary outcome – this is good, this is bad, this is whatever. What about scenarios where we’re talking about… we’re looking for suggestions, we’re looking for a degree of creativity. Presumably this plays into which model you’re using, because some presumably are more creative than others. And I don’t use creative in a pejorative sense here.
KB-K: I would argue that, generally speaking, if you are playing with the foundational model – that’s the actual LLM that underpins a lot of these chatbots…
PL: LLM?
KB-K: Sorry, large language model. Those large language models, if you have access to them, so say, Azure OpenAI, you have a lot more customisability over a lot of the aspects of that tool and how it operates. There are settings in there where you can adjust the… creativity.
PL: Yes, I was looking for a better word for that and I can’t quite think what that might be.
KB-K: We can get into temperature settings and what have you, but for most people the question is: how creative is the model going to be? There’ll be some circumstances – if you are, say, trying to get guidance on a specific technical point – where you will want that creativity to be as close to zero as possible…
PL: Dial down, yes.
KB-K: …because you want the model to say either we’ve got a binary yes or no answer – or most importantly, I don’t know and here’s the human you should contact to discuss this. Or there will be some areas where you may want some more creativity. It could be that in terms of content generation, which is really the key selling point of a lot of these models, if you’re going into a tender process, or if you are trying to potentially even plan out an audit for a first-time client, you may actually want to say, right, well, here’s some information about the client, what do you suggest we try to do in terms of approach? Now, as Fay says, some of what’s in there will be very good. Some of it will be complete nonsense. But it’s up to you as the professional auditor to be able to look through that and say, okay, I like this, I like that, we’ll skip over that, and I’ll use my experience and professional knowledge gained over many decades in this industry to come up with a better plan.
It’s very good at getting over what I call the blank sheet of paper syndrome. If I have to write anything, it takes me a very long time to get the first words on a piece of paper, whereas if I’m using a generative AI tool, I can quickly get it to come up with a first draft and then I start editing and I can get into the zone and the creative flow. And what you end up with is hopefully very different to that initial draft. But again, that saves me time in just getting over that initial hump of where do I begin.
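For listeners who want to see the dial itself: on OpenAI-style chat APIs, the ‘creativity’ setting Konrad mentions is the `temperature` parameter. A minimal sketch, assuming the standard `openai` Python client and an `OPENAI_API_KEY` in the environment; the model name and example questions are placeholders.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Near-zero temperature: technical guidance, where you want answers
# as repeatable and conservative as possible.
factual = client.chat.completions.create(
    model="gpt-4o",
    temperature=0,
    messages=[{"role": "user", "content": "Does IAS 2 permit LIFO for inventory?"}],
)

# Higher temperature: first-draft content generation, where varied
# suggestions are useful because a human will edit the output anyway.
draft = client.chat.completions.create(
    model="gpt-4o",
    temperature=0.9,
    messages=[{"role": "user", "content": "Suggest an audit approach for a first-year retail client."}],
)

print(factual.choices[0].message.content)
print(draft.choices[0].message.content)
```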
PL: Getting back again to the question of choosing models, there must be a tension presumably between explainability and capability?
FB: Yes, completely. Should we be using the most accurate AI models, where it’s almost impossible to explain how we’ve got to the answer that we have…
PL: Which presumably makes people very nervous?
FB: Yes, exactly. How can you as a partner sign off on an audit opinion if work has been performed that was generated by an AI tool we don’t understand? How can you sign off on something you don’t understand?
PL: A lot of trust, a lot of regulatory issues, presumably.
ON-S: It seems to be a consistent tension. Even the Information Commissioner’s Office, years ago, published guidance on explaining AI decisions, in which they note a number of different models and make a distinction between interpretability and explainability. Interpretability means how easy it is for you to interrogate a model. That runs from models you could probably build yourself – closer to the mathematical models that people who’ve done master’s degrees might have had to build as part of their studies – through to some of these generative networks where interpretability is non-existent: not just for a third party, but because even the people who built and designed the model have no way of justifying why it decided on a given output.
And likewise with explainability. I do think there’s an interesting side to human psychology, though, which gets noted when you have these conversations: when human beings are asked to explain a decision, a different part of the brain activates from the one where the decision was actually made. So even when we’re asked to explain decisions, we end up creating after-the-fact explanations for why we did it.
PL: Because we don’t understand the process.
ON-S: The other thing is that we have more confidence in humans giving us that answer. A machine coming up with a rubbish answer is also not something we would necessarily want.
PL: Where is the regulatory framework on this? Do we need more clarity there?
ON-S: Yes, but that’s in development in the UK. The EU has very recently formalised the EU AI Act, which will start coming into force, whereas in the UK the government has decided to go for a sectoral approach: individual regulators will have to determine how they will begin regulating artificial intelligence. The first round of regulators prioritised for this have published letters – for people who are really into regulatory policy, you can go and find them – with the likes of the FCA, the PRA and Ofqual putting out statements about the areas they view as high risk and should be looking at for regulation.
PL: But we don’t have timeframes on when any of this will actually be set in stone?
ON-S: It’s kind of loose, and with elections coming up, it tends to be the case that policy starts to wind down a little bit in the interim. I would say also, there’s a lot of pre-existing regulation from some regulators like the FCA and the Information Commissioner’s Office, and some useful guidance that you can find there. I think for auditors, with the FRC, it’s still going to be a bit of a longer journey – also because the UK government’s approach has been very much based around wanting to promote innovation first. And the AI Safety Summit was more interested in how these models might be used in weaponry and medicine – use cases with much higher risk to human life – than the kinds of things that most businesses, or their regulators, would be concerned about.
PL: Thinking again about safety and accuracy, it must inevitably, presumably, be less risky to use a model and train it on your own data, or does that not work?
KB-K: There’s always risk, life is filled with risk, and you have to determine whether the potential payoff of doing something is worth the risk of doing it. And that’s with absolutely everything. I would certainly say that in terms of the data governance and security aspects, it’s an easier sell to train a model on your own in-house data…
PL: There’s a lot more comfort around that for people.
KB-K: Exactly. It’s information and guidance that you have curated as a firm that’s got your review and seal of approval. So it sort of comes with more comfort. And if you are building your own AI model on this… Fay mentioned, we’ll train a model on the auditing standards, we’ll feed in the accounting standards, we can feed in any guidance and practice notes that our audit technical teams have published over the last however many years, and we’ll inevitably end up with a Mazars GPT and an RSM GPT that functionally do the same sort of thing in terms of analysing a set of financial statements and providing some potential talking points. But they will come at it from a different background and a different perspective.
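Neither firm has said exactly how such a ‘firm GPT’ would be built, but one common pattern is to retrieve the firm’s own curated guidance and place it in the prompt, rather than retraining the model. The sketch below uses a deliberately crude word-overlap score as a stand-in for a real embedding model, and the guidance snippets are invented.

```python
# Minimal retrieval-grounding sketch: answer only from the firm's
# own curated guidance, and escalate to a human when it can't.
FIRM_GUIDANCE = [
    "Practice note 12: revenue sampling approach for retail clients...",
    "Technical memo: IFRS 16 lease disclosure checklist...",
    "Methodology update: materiality benchmarks for pension schemes...",
]

def score(query: str, doc: str) -> int:
    """Crude relevance score: count of shared lowercase words."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def grounded_prompt(question: str, top_k: int = 2) -> str:
    """Build a prompt that cites only the firm's reviewed guidance."""
    ranked = sorted(FIRM_GUIDANCE, key=lambda d: score(question, d), reverse=True)
    context = "\n\n".join(ranked[:top_k])
    return (
        "Answer using ONLY the firm guidance below. If it does not "
        "cover the question, say 'I don't know' and refer the user "
        "to the audit technical team.\n\n"
        f"Guidance:\n{context}\n\nQuestion: {question}"
    )

print(grounded_prompt("How should we sample revenue for a retail client?"))
```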
FB: And this is where we have to be really careful, because this is where bias comes into play. If you’re only training it on your own data, you’re always going to introduce bias into it unintentionally. There are various ethical concerns and things to think about when training AI models. You’ve got to look at the fairness, the transparency, the explainability or the interpretability of it, and any inclusivity issues as well.
If we’ve got one firm training an audit model on their own data, and they only work with clients of a certain size or in a certain location, you’re going to naturally train it to respond in a way that is only ever going to be relevant to those types of clients. And we don’t always record that fact when we’re training the model, to say, by the way, guys, only use this model for this type of client and this location, because this is how we’ve trained it. And that’s where the regulations will be interesting: what disclosures will we have to make about the AI models that we’re training and using?
PL: Yes, that’s a really interesting point. But do you have any more security around that if you’re using external datasets? Because all datasets are flawed.
ON-S: It’s a very broad question, but for me it boils down to a quality question as well. Sometimes the documentation and data you have on internal systems is not the best quality. We’ve all worked in places where file trees get a bit messy, where drafts get lost and things are out of date. So again, the foundational piece is your systems: have you created structure and governance around what information you have and what data you will use for training? That’s probably one of the most important things, alongside considering bias and the ethics of what you might be training it to do, and the risk that bias might enter in. The bias might just be around your methodology, which you will probably feel comfortable with, or it might be linked to specific clients, in which case you start introducing more risk.
I think, at least with external data, in theory you might have a bit more of a sense that the data quality has been attested to by a third party. But then you also have to trust that third party. You have to exercise, as all auditors do, a bit of professional scepticism and judgment. I think we’ve alluded to this a lot.
PL: This is back to human oversight.
ON-S: Exactly. And it’s sort of…
PL: Does that look right? Does that feel right? All that?
KB-K: It’s interesting, you said both does it look right and does it feel right. Because that comes back to the point that you were making earlier around how do humans explain decision making? And generally we can’t, and we normally come back and say, oh, you know, it’s a gut feel…
PL: Based on experience, whatever it is, yes.
KB-K: Exactly. Which obviously doesn’t exist in cold, hard AI machinery land. It’s binary, it’s ones and zeros. And it’s always struck me as very interesting when talking about those sorts of aspects, where a lot of these tools – not just AI tools, but a lot of software tools – will try to highlight a level of confidence that they have in a given answer, and it’s normally expressed in percentage terms. And people are very bad at understanding statistics and looking at numbers, and will sit here and say, oh, it’s only 95% confident the answer is right? It’s like, hang on a second, that’s an incredibly high degree of confidence.
And actually, if you look at a lot of the statistical methodology that underlies how we pick an audit sample, for example, that’s all based around the magic number of 95% – two standard deviations from the norm. You’re trying to get comfortable that 95% of the population follows an expected pattern. So I think it’s worth highlighting this strange view that people seem to have: the assumption that any decision made by a human must be 100% right. That’s so far from the truth.
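For reference, the 95% figure Konrad cites is the classic normal-distribution result – roughly two standard deviations either side of the mean:

```latex
% About 95% of a normally distributed population lies within
% 1.96 standard deviations of the mean:
\[
P(\mu - 1.96\,\sigma \le X \le \mu + 1.96\,\sigma) \approx 0.95
\]
% So a tool reporting "95% confidence" is working to the same comfort
% level that classical audit sampling is built around.
```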
PL: But also the expectation that AI is going to give you 100% correct answers, because we’ve kind of talked about it in those terms, haven’t we, that this is the answer, this is where human error is stripped out, it will be the answer. And that’s just not true, is it?
KB-K: It’s impossible. And I think anyone who’s worked in anything software related will attest that there is almost never just one solution to a problem. There are hundreds of ways to tackle a problem, all leading you to the same sort of answer. But it really depends on your particular skill set, your tool set, your risk appetite for how you approach certain things, and the industries that you work with. The client base of RSM and Mazars broadly speaking isn’t too dissimilar, but we have different sector specialisms and industry focus groups. And so it may be that we’ll have more of a bias, say, in terms of pension scheme audits, because RSM do quite a lot of those, compared to another firm. And that sort of bias for us will be a selling point. Because we’ll say, well, look, we’ve got a lot of experience in this area, we’ve got a tool that we really rate. Whereas we don’t deal with any banking clients, so we would never want to apply a tool that we have developed to that sort of environment, because we have no experience and we know full well that that’s just a recipe for disaster.
PL: Zooming out, it seems to me attitudes to this are perhaps the first, the very first step. What people can realistically expect from generative AI across the piece – I’m not talking about specific models but really what it might potentially be able to do for you. And I wonder if there’s a bit of a gold rush feel to all this, because there is so much talk about it, there is so much written about it. There’s almost a sense that it’s essential and you have to use it. But from what you were saying at the start, Fay, that’s not right, is it? It is not necessarily the answer to everything.
FB: Not necessarily. Take the use of OCR scanning – that’s optical character recognition – technology in invoice processing. Firms like Mazars and RSM tend to steer away from clients that provide paper documents nowadays. However, there are still accountants out there that will process your paper documents, and actually there are clients out there that want to pay a premium for that service.
PL: Really? They have more confidence in it.
FB: Some clients do. Some clients just never want their information to go through OCR technology, or they are happy for it to but they don’t want to do it themselves. They want to give you the paper to do that. So, in my opinion – obviously none of us knows what is going to happen in the future – I think we’ll see firms start to adopt gen AI in parts of their audit processes, but I think we will still have firms that don’t adopt much AI in the years to come, and some clients will prefer to use those accountants. Potentially. We don’t know.
PL: So they’ll remain competitive in their own niche?
FB: Yes, potentially. I think human nature is that we have a choice: who do we want to audit us? Who do we want to be our accountants? Who do we want to work with? And if I were a client, personally I would look at which audit firm I want to work with as a whole. Do I have good relationships with the firm? Are they good at communication? Are they on time with what they say they’ll deliver, and do they deliver what they say they’ll deliver? I don’t think I would be too concerned about the tools they’re using to get to the result, because that will come out in the wash when we look at pricing. If an audit is incredibly expensive, it’s probably because it’s got a lot more human touch points in it. And if I wanted it to be that way, then I would have to pay more for that type of audit.
So it’s not all doom and gloom. And it’s not all to say that if you don’t use AI in an audit now, you are not going to be competitive in years to come. I don’t think we can say that. I don’t know. But I don’t think it’s that clear-cut.
PL: Is there a danger that practices could lean too heavily on this technology because they don’t understand it?
ON-S: While there’s a lot of talk about it, as we were saying right at the beginning, most places are testing, most people are just trying to work out how to use it. It’s not unfeasible to think that the hype does introduce that risk. But it wasn’t that long ago that it was blockchain technology, and it’s not that long ago that there was another AI hype cycle, when IBM’s computer beat Kasparov. These hype cycles seem to come around regularly. But they haven’t resulted in that much risk being introduced through the use of the technology, because once the hype, the content, all these people talking on podcasts about it…
PL: We’re adding to the problem.
ON-S: When the dust settles, everyone… you’ve done your testing, you’ve done your moments here and there… As Fay was talking about, businesses will come to their own decisions about where the technology is appropriate to use. They’ll have their own risk appetites, they’ll have their own uses. And actually what kind of firm do they want to be, is the fundamental question. How do they want it to work? There is potentially a risk, but if other hype cycles are anything to go by, I don’t think it’s really much of a risk in this profession at least.
PL: We need to wrap this up, but I do want to ask you: for firms that are just in the very, very early stages of thinking about this, what should their first steps be? Should they be trying to develop some sort of policy? If so, how do they do that? What are the first rational, low-cost steps that people should take?
KB-K: From my perspective, something we’ve done at RSM is to adopt what I’d term a bottom-up approach. We went out to the business and said: AI tools are there, ChatGPT is publicly available – we can’t stop you from using it, please don’t put client information into it, play with it as you will. But we wanted to reach out to everyone across the business, irrespective of rank, role or service line, to ask: what are your pain points within the processes that you undertake, be they client-facing processes or back-office processes? Efficiency can be found everywhere. And we went through a process of collating all of that information and then, on the basis of that, trying to work out, well, what is the low-hanging fruit? Where can we get the biggest bang for the buck?
We hadn’t even thought about AI at this point; it’s just, these are the big problems, and now let’s have discussions with those stakeholders to work out how big of a problem is this? What’s the sort of thing we can think of to try to solve some of those problems? And in some cases, the answer is generative AI. In some cases, the answer is robotic process automation, or RPA. I always caution against that, because if you automate a rubbish process, you just end up with a very quick rubbish process. But it’s having that sort of critical thought process around it – there are 100 different solutions to the same problem. You need to have a good idea of what the potential upside of solving that problem will be. And really, how much time, money and effort you’re willing to invest in order to get to a solution that does what you, as a firm, need it to do. Which for a big firm might be going hard on a generative AI model. For another firm it may be that, actually, there’s a simpler solution that we can implement that won’t get us 100% of the way there – but having 80% of the problems solved is still better than having none of the problems solved.
Fay, I’m sure you’ve got your own opinions on this?
FB: Yes, I think this is my favourite question. With my digital skills hat on, I think starting with an AI policy or strategy is helpful. A lot of firms don’t have them yet. So, what are we going to do? We’re not going to stand still and wait for those. What can we do, practically speaking? Upskill everyone in AI – it’s not difficult, there’s loads of free training out there at the moment, lots of e-learning, and actually lots of relatively low-cost, live-delivered courses in person and online.
PL: So get everyone to a baseline?
FB: Get everyone to a baseline. After the two-and-a-half-hour training call that we ran, we measured people’s confidence in describing what AI was to a stranger before the training course and after the two-and-a-half-hour training course. And that went up by 1.3, I believe…
PL: Is that good or bad?
FB: …to four out of five. No, it was good. It started at 2.6 out of five and it finished at 3.9 out of five in terms of how confident are you to describe what AI is to a stranger? Another question we asked was, on a scale of nought to five, nought being I’m so worried about AI taking my job and I have a lot of concerns, to five out of five being I’m super-excited about all the things that are going to happen with AI and how I can use this in my role. And before the training course, they scored it at 3.5 out of five, and after the training course it went up to four out of five.
PL: So they were positive?
FB: Exactly. So if you want your people to remain engaged, positive and excited, then I think training is key. And actually, the best thing you can do is take a step back and look at what your big problems with audit are. Is it that you’re not winning audit tenders? Is it that you’re pricing too high, so you’re not winning because you’re being outbid? Is it that you’re losing staff, so you can’t staff audits quickly enough?
My colleague Cilia Kanellopoulos runs the UK Innovation Lab, and there’s a really interesting exercise you can run with the Innovation Lab, or by yourselves in your business: ask why these problems are problems. And when you find out why they’re problems, ask yourself again, so why is that a problem? Let’s take the example of not winning enough tenders. Why? It’s because you are pricing too high. Why? It’s because you’ve got too many humans involved in the process. Why? It’s because you need to automate certain areas of the audit. Why? It’s because your data is not good enough quality. And if you keep asking yourselves why, a lot of it comes down to the fact that we don’t have the right skills to understand how to improve small things along the way, or how to recommend the bigger things that we need – how to say, hey, I need this technology because X, Y, Z, or I need this training because X, Y, Z. Because people don’t know what they don’t know.
We can sit here and talk about AI hype and all of the doom and gloom. But actually the most exciting thing and the thing that I really hope people take away from listening to this is that it’s not difficult to upskill, it’s not difficult to recommend upskilling courses. And I think we will all be in a better place if we can improve that kind of foundational knowledge level of AI.
PL: Oliver, you’re nodding. Final word?
ON-S: It’s a blend of the two, right? You need people who have good knowledge and who can actually begin to know how to answer the question – and I think that’s a broader digital skills thing anyway. It’s not just about AI but about a whole bunch of different skills related to the modern workforce, the 21st century. And on the other side, it’s identifying your problem and trying to pin it down. The sentence I like that maybe summarises it is: how do you make it so that the right thing to do is the easiest thing to do? You want your systems to make it so that people are doing the right thing in the easiest, most efficient way possible. If you can get that right, then you can have as much fun with AI as you like.
PL: Everyone, thank you so much. I have learned a lot. I think our listeners will too. Thank you very much.
FB: Thank you. It’s been great to be here.
KB-K: Thank you.
PL: We’ll be back in late June to discuss how to tackle groupthink at board level and that, I think, is going to be an interesting one. You will also notice some changes in the weeks ahead as we give the podcast a bit of a makeover. Insights, where we cover the nuts and bolts of accountancy work, will soon become Accountancy Insights, and this series, Insights In Focus, which as you know discusses wider business and economic issues, will have a new name too, and that is going to be Behind the Numbers.
So different names, but we will be covering the same broad range of topics across accountancy, finance and business with guests from ICAEW and beyond.
Thanks for being with us today. Subscribe to ICAEW Insights on your favourite app and you will never miss an episode.