In this episode, we recap the revisions to ISA 600 on group audits, and we discuss the likely contents of the AI Bill that is expected to be brought to Parliament before the end of the year.

Host

Philippa Lamb

Guests

  • Catherine Hardinge, Partner, Price Bailey
  • Oliver Nelson-Smith, Tech Policy Manager, ICAEW

Producer

Natalie Chisholm

Transcript:

Philippa Lamb: Hello. Welcome to Accountancy Insights. I’m Philippa Lamb with your monthly round-up of news for the accountancy profession. Today, as the auditing standard for group audits is revised, Price Bailey partner Catherine Hardinge will be talking us through the key changes. ICAEW Tech Policy Manager Oliver Nelson-Smith is with us, too. He’ll be filling us in on what to expect from the government’s new AI Bill. Hello, both.

PL: Catherine, should we start with group audits? Because the standard that deals with them, it’s been revised, hasn’t it? Can you give us a bit of background?

Catherine Hardinge: So ISA 600 is the standard on special considerations for audits of group financial statements, including the work of component auditors. It’s effective for accounting periods beginning on or after the 15th of December 2023.

PL: It’s effective for what accounting periods then?

CH: So, it’s effective for accounting periods commencing on or after the 15th of December 2023 so will affect upcoming December 2024 year-ends and potentially shorter periods as well.

PL: But it’s unlike other auditing standards, isn’t it?

CH: Yes, it doesn’t stand alone like other standards. It draws on other standards and supplements them. The FRC believes that the enhancements to ISA 600 will support group auditors’ efforts in achieving higher quality audits, especially given that the quality of group audits is a persistent finding in AQR inspections.

PL: And these changes, the plan is also to align the standard better, isn’t it, with other auditing standards?

CH: Yes, there have been a number of updates recently. In particular, we’ve got ISQM 1 and 2 and ISA 220, and all of those have quite a big impact on ISA 600.

PL: And it’s group and component auditors that are affected?

CH: Yes, it will be both that are impacted quite significantly by this. As well as large and complex multinationals, it will also impact smaller, simpler groups.

PL: So that’s the background. For auditors, what would you say are the key practical changes that they will need to put into practice now?

CH: One of the key changes is the concept of a significant component has now been removed from the standard, so there’s no longer a set of quantitative thresholds above which an audit must be performed on a component. Instead, it’s going to be a much more risk-based approach.

PL: So what’s that going to mean?

CH: It’s going to mean much more difficult judgements for the group auditor to try and work out the level of work to be performed on each of the components and who is going to perform it. So as part of the scoping process, the group auditor will need to consider not just factors such as the competence of component management, but also concepts such as the level of disaggregation in the group; they’ll need to consider both the number and size of each of the components and think about the different risks.

PL: So there’s a lot to think about here, isn’t there? So group audit engagements, they often include component auditors. Practical changes to think about there, too?

CH: One of the key changes is that the definition of engagement teams was updated with ISA 220. Engagement teams now include those who perform audit procedures on the audit engagement, which means component auditors will have to be treated as part of the audit engagement team. This could also include different firms that carry out specific procedures, such as stock takes in other jurisdictions. There are a number of practical challenges here, in particular around attending team meetings and taking part in, for example, fraud discussions, as required by ISA 240.

PL: The group engagement partner, their responsibilities are more clearly emphasised now, aren’t they?

CH: Yes, they are. They need to ensure that they are confirming the component auditors understand and comply with a number of standards, including the revised ethical standard. It’s really key that the group engagement partner is seen to be leading and directing.

PL: So this is about good communication, isn’t it, between the group engagement team and the component auditor. It’s obviously important. What should that ideally look like now?

CH: I mean, communication is so key in these situations. It really needs to be very robust, two-way communication between the group and the component auditors. Early communication is going to be absolutely vital to make sure that everybody understands what’s going to be involved and whether there are going to be any changes to the scope of the work that needs to be carried out. Challenges to overcome could include issues such as sharing audit documentation: what language is the documentation going to be in? Is it going to be electronically available to people? Are there travel restrictions? Then there’s providing guidance on compliance with the ethical standards, because there are a number of differences between jurisdictions, and depending on the different requirements, you may need to provide additional guidance and support.

PL: So a lot of upfront work?

CH: A lot of upfront work, and it’s going to be quite clear now that going forward, one-way group audit instructions are no longer likely to be effective in most cases.

PL: So since ISA 315 – the standard on risk assessments – was revised, everyone’s been thinking hard about the use of IT, haven’t they, as part of an entity’s internal control environment. How does that interact with the revised ISA 600?

CH: Well, it’s going to be quite complex for group auditors now, because they’re going to need to understand the risks posed by the use of IT across all the different components, and think about the level of integration of the different IT systems across the group. The group auditor will need to consider the commonality of IT and manual internal controls and whether design and implementation can be done group-wide.

PL: Right.

CH: They must also consider if any activities are centralised. Are there shared service centres? And whether this constitutes a separate component of the group that they need to consider.

PL: Thinking across the piece, what would you say is the most important point for auditors to think about before the standard is actually implemented?

CH: The real key thing is to think about things early on in the process. Think about the changes both by the group and the component auditors. It’s really key to plan revisions to the audit approach, because there will need to be changes this year. Earlier and more communication between the group and component auditors is absolutely essential.

PL: Now on to the AI Bill with Oliver Nelson-Smith, and it’s coming soon, isn’t it? We’re expecting the Bill to be introduced to Parliament before the end of the year. Is that right?

Oliver Nelson-Smith: Yes, we think that’s right. It could be that it’s delayed if the government has other priorities, but that seems to be the speed at which they want to go. It’s a bit of an odd one, because it wasn’t explicitly mentioned in the King’s Speech, which is usually when they announce their legislative agenda for that period. But Peter Kyle, who is the Secretary of State for Science, Innovation and Technology, has been talking about it quite a lot, even before the most recent election.

PL: So that sounds quite promising, but this shouldn’t be confused with the separate private member’s bill that was introduced to Parliament last November, should it? That is a totally different issue.

ONS: No. I mean, that was dropped following the announcement of the last general election. It sought to set up a central authority to manage the way that AI would be regulated and legislated for within the UK. However, the previous government’s approach also just means that that is not really a tenable solution anyway.

PL: And as you say, obviously we don’t have the actual Bill, but there’s been a bit more detail, hasn’t there since the King’s Speech?

ONS: Yeah, that’s more or less right. I mean, Peter Kyle’s been making suggestions, even prior to the King’s Speech, about what he’d like to see within it. There are sort of two sides to it. One is that they want to take the voluntary agreements that were put together after the last government’s AI Summit – around the sharing and testing of AI models with governments before these large tech companies can deploy them – and move those from being voluntary agreements into legally binding ones. So there will be a legal requirement for the models to be shared before being deployed, for governments to test for risks and vulnerabilities. The second side would be trying to give a more legal footing, a legislative footing, to the AI Safety Institute, so it would actually be recognised as a proper government body, as opposed to this sort of little thing that’s been spun out by Rishi Sunak’s government.

PL: I mean, some companies have already signed voluntary agreements, haven’t they?

ONS: Yes, the big ones already have, yeah. I mean, the big ones, like Open AI, Microsoft, Google, they’ve all signed on to those voluntary agreements.

PL: But not in all territories?

ONS: In all the territories of the countries that attended the AI Summit – which, I mean, is essentially the OECD really.

PL: Okay. I mean, understandably, there has been some nervousness, hasn’t there, from major players in the sector?

ONS: Peter Kyle, publicly, has been pushing them a little bit by saying that he doesn’t want to give them a “Christmas Tree Bill”.

PL: What does he mean by that?

ONS: I think it’s to say that he’s not going to give them presents under the tree?

PL: No, I don’t know. Or is it about mission creep? I didn’t really understand the phrase.

ONS: Yes, no. I mean, to me, it felt very much like he’s just trying to say that this isn’t going to be a very light touch thing, and that they should expect to actually be held to some requirement.

PL: Okay.

ONS: The thing is that that’s sort of a bit of public chest bumping, I think, because there are already, I mean, the European Union’s AI Act came into force last year, and it already has quite prescriptive requirements and risk categorisations around the use of the models. The Biden administration also signed an executive order that was mandating the creation of standards and that these companies adhere to those standards on safety, security, trustworthiness. It means that the UK is sort of loud and bragging, but also may be a bit behind.

PL: Because, I mean, there has been some argument that we’ve already got data regulation that covers a lot of the risks that the models might pose, so…

ONS: Yeah, so I mean, that was sort of the previous government’s approach as well: that these pre-existing regulations, not just in data but in other sectors too, can cover it. Because if you imagine that data is the input into the model – you have your box – then the output is the outcomes, which are regulated already. You already have regulation of inputs, you already have regulation of outputs. Do you really need regulation of the middle, of how the cheese is made? And so the ICO, for example, within the current powers of the data law, has already been looking at a lot of biometrics: use of facial recognition, emotion analysis.

PL: What is emotion analysis?

ONS: Emotion analysis is using the same thing as you would use for facial recognition algorithms, but to try and detect people’s emotional reactions to things. So there are already some HR and hiring software providers who will give you models that are meant to give a sense of whether or not the applicant might be lying or a sense of nervousness. These are quite heavily controlled within GDPR already.

PL: Yes, I mean, some obvious pitfalls there, aren’t there?

ONS: Hmm, yes, because human beings are already pretty bad at telling whether or not human beings are lying, or emotions anyway. So whether or not models would be better than us at it I think is a little bit…

PL: Yeah, and particularly in interview situations – that seems fraught with difficulty.

ONS: There are some companies who brought them in but have since moved back against using them, because again, it’s handing more and more decision-making outside of your own control. And again, human beings should think critically about what the model is telling them, which is what we would suggest, but it’s human nature: if something tells you this person is maybe bamboozling you a little bit, then you might well believe the machine over exercising your own judgement.

PL: Yes, it puts the question in your mind, yeah. So what does ICAEW make of this approach? What are you hoping to see in the Bill?

ONS: I mean, we’ve been calling for this since the last government as well – we want to see more done around the regulation of technology, particularly the very powerful ones. We believe that going forward, deep-fake technology in particular – the ability to mimic people’s voices and images – is going to make things increasingly difficult, especially around financial crime, where there are a lot of money-laundering regulations and so on; I think it’s already quite difficult for businesses operating with these things. And economic crime is the largest and most common form of crime in the UK, including scams and things like that, and this is just lowering the bar and increasing the ease with which scammers can do it. So we do think that there needs to be a lot more. We do think that the original approach that the government took, or the Sunak government took, seemed prudent, because it was trying to accommodate innovation within different sectors. So it was saying very much that, you know, your industry sector regulators can mandate and create rules – they already have the power to start regulating AI within their sectors – and it was pushing them quite hard to think about it. But introducing this Bill is also a good idea, because there are plenty of use cases for AI that fall outside the normal regulatory perimeters that you might have.

PL: So they might cross boundaries, effectively?

ONS: Exactly. So I mean, the AI models that have caused a lot of the public hype in the past year – your GPTs and your Copilots – have really been the general use case; they’re not designed with a specific thing in mind. So, for example, you could be using Copilot to give financial advice, which is heavily regulated – you have to be approved by the FCA in order to do it. But it’s also that Microsoft itself doesn’t hold any accountability for having developed that model.

PL: Because it’s not designed specifically for that?

ONS: Exactly, because it’s not designed specifically to do that.

PL: Okay. And the sense is the UK is a bit behind?

ONS: Like I said, compared to some of the other jurisdictions, particularly the EU and the United States, you could argue that it is falling a little bit behind. But that being said, being behind is not necessarily the worst thing, because you can learn from the difference between the two approaches and see what actually is good versus bad.

PL: Yeah.

ONS: What you want, again, is to encourage responsible innovation and use of the technology – emphasis on responsible. Equally, though, because of the global nature of these technologies, we at ICAEW do want to push quite hard on the idea that there should be as much international alignment as we can manage, again, just for the UK’s own ability to continue trading. We do have quite a good environment – a university environment, sorry – for creating entrepreneurs and developing these models; Google’s DeepMind, many years ago, and AlphaGo were developed in the UK. So we need to make it so that the people who develop these models, if they want to sell their businesses to Google, for example, still have the ability to do so, and it doesn’t suddenly fall outside of what the United States might find responsible, reasonable, safe and secure, or what the Europeans might find disagreeable.

PL: So ICAEW would want to see the progress that’s already been made built on, rather than discarded?

ONS: Yes, exactly. I mean, we’d also like to see it better funded. There was some money, I think £10 million, that was put aside for upskilling regulators in AI, but we find that’s pretty insufficient, given that there are quite a lot of regulators in the UK who cover a lot of other responsibilities. Equally, there is supposed to be a coordinating body between regulators to manage where regulation of these technologies might overlap between different regulated sectors, but so far it seems like it’s not actually been set up in a way that’s effectively doing that.

PL: And of course, accountants are already upskilling themselves in AI, they’re having to do so. Do you want to just run us through what ICAEW is already offering for them?

ONS: We love a shameless plug.

PL: Well, why not?

ONS: We have an AI hub – an artificial intelligence hub – on our website. It brings together a lot of our guidance and training, as well as the latest industry press, in one place. There are four free learning modules, if you’re particularly interested, to help you begin building your skills in this area.

PL: So if you’re an outright beginner, you can go there.

ONS: If you’re an outright beginner, you can go there. We’re soon going to be launching one on AI ethics on World Ethics Day, which I think is mid-October, October 16. For those of you who are members of the Corporate Finance Faculty, that’s a sort of add-on to your basic membership. There will be a guide that’s being published on the use of the technology within M&A deals with quite a practical focus on use cases.

PL: When’s that likely to appear?

ONS: Later this year.

PL: Okay.

ONS: We’re ironing out some of the details of it, but it will be, yeah, I think, later this year.

PL: And there’s a wider campaign coming soon, too?

ONS: Yes, that’s right. We’re sort of trying to build further on our members’ awareness and skills in AI, so we’ll be producing a lot of new content across the website and social media channels. So yeah, keep your eyes peeled, and please engage with us.

PL: That’s it for today, thanks for listening. Talking of AI, later this month on Behind the Numbers, we’ll be looking at how cyber criminals are embracing AI. What steps can you take to protect yourself and your firm? Do not miss that one. Finally, another reminder that listening to these podcasts counts towards your CPD. If you listen on the ICAEW website, you can just click the Add CPD tool each time you listen, it really is that simple. Meantime, please rate, review and share this episode and subscribe to the series wherever you like to get your podcasts. Every rating helps the series to reach as many accountants as possible, and that’s why we make them, so thank you in advance.
