In this episode of Behind the Numbers, we discuss how the use of new technologies by criminals is putting businesses at risk.

Host

Philippa Lamb

Guests

  • Ian Pay, Head of Data Analytics and Tech, ICAEW
  • Mark Turner, Chief Technology Officer, Mitigo
  • Paul Munson, EU Compliance Lead, Rippling

Transcript

Philippa Lamb: Hello, welcome to Behind the Numbers. I’m Philippa Lamb. Today, how the explosion in digital disruption and the use of new tech is putting business at risk. According to a Europe-wide report by the Chartered Institute of Internal Auditors, AI now ranks as the fourth biggest business critical risk – that is up from sixth last year, and it’s the fastest rising risk category: 40% of respondents cited it as a top five risk. So how can professionals protect themselves and their employers? And in particular, what can SMEs with limited budgets and options do to combat this threat?

Ian Pay: When you are under attack, what you don’t want to have to think about is, what are the steps I need to take? So you want that written down. You’d want to be able to just go to that and say, OK, I now know what I need to do.

Paul Munson: You’re using multiple factors from devices and things like that, and passwords, but also even in the onboarding flow, to get around the sort of possibility of deep fakes.

Philippa Lamb: I’m joined by Ian Pay, Head of Data Analytics and Tech at ICAEW – he’s been investigating deepfakes; Mark Turner, who’s Chief Technology Officer at cyber security firm Mitigo; and Paul Munson, formerly of HMRC, the FCA and the Serious Fraud Office, and now EU compliance lead at workforce software specialist Rippling. Hello, everyone, thanks for coming in. We are talking about cybercrime. We’re going to concentrate on AI specifically. I mean, there are multiple ways that criminals use it. Should we just try and loosely categorise them?

Paul Munson: Malware is one that jumps out at me, you know, which is where people are inadvertently downloading something that’s going to damage the computer system.

Mark Turner: So when we look at cyber security, we can look at it from a few aspects. You’ve got the corporate infrastructure and attacks on that. You’ve got the endpoint devices – so your laptops, your phones, and the attacks on that, and then you’ve got the people side of things as well – things like phishing and social engineering, trying to gain information or make a user do something, and that’s the one that I think most commercial users probably need to be most aware about. The things with the endpoints and the back-end infrastructure – that probably needs to be taken care by your IT teams.

PL: There’s a lot of jargon, isn’t there? Spear phishing? Ransomware?

Ian Pay: Phishing is essentially throwing as much out there as possible – the term comes from the idea of fishing: casting your line into the water as widely as you can and hoping you catch something. Spear phishing narrows that down – it targets a specific individual, using information about them to make the lure convincing.

Mark Turner: And if you carry on the fishing analogy, you have whaling, which is targeting the CEO or the CFO – the big players – and it’s actually really targeted, going for somebody of useful worth to you as an attacker.

PL: Any other common terms we might dig into?

PM: I was thinking more about the approach for this session – SMEs. The vector of attack depends on the type of business you are, but I think anybody faces the phishing one, because, as you say, they literally cast the net out – or cast the line out – and they just need a bite, and they’ll get loads of bites. The other thing I was thinking about, with the whole session being about AI: fraud used to be a laborious job done by humans. Now criminals can use tech to do some of the early stuff, the grunt work, and make it more efficient when they actually go for the attack. And they do put some social engineering into it as well, especially in how they target businesses. Take CEO fraud, which we sort of touched on with whaling: they’ll use a little bit of business information, maybe find somebody in the business, and put some urgency into a message – pay me now, do this. There’s always urgency in these things, and that might be the vector of attack.

IP: And of course, the thing we’ve not touched on yet is ransomware, which remains a really, really big threat. It’s usually not the way in for the criminals, but what they do once they’ve got into an organisation: essentially stealing the data and locking it down, and then monetising the attack from there.

PL: It’s interesting you mentioned that, Ian, because we have an example of that. Actually we’re going to hear now from the CEO of an accountancy firm which was unfortunate enough to fall victim to a cyber attack, a ransomware attack, despite having taken very thorough precautions. And we’ve disguised the voice of our speaker to protect the identity of him and his firm, but we are very grateful to them all for sharing an experience which we think everyone will find both salutary and useful. Let’s have a listen.

Anonymous: On my way to work, on the school run, at just before eight o’clock in the morning, I had a phone call to say that our systems were all down and nothing was working. Obviously, I’d been in the office the day before, and when I’d left at six o’clock everything was absolutely fine – no problem. So I initially thought, OK, that’s all right; we’re in touch with our IT provider – they’ll fix it; they’ll sort it out. By the time I got to work half an hour later, I fully expected everything to be fine. It wasn’t. And increasingly our team, our employees, were getting more and more cross that they’d come to work and couldn’t do anything. So our IT providers came on site and started having a proper look into all of our systems. And it was a couple of hours later – approximately quarter past, half past ten in the morning – that our IT provider tapped on my office door: “We’ve got a fairly major problem. It’s a ransomware attack.”

PL: What went through your mind at that moment?

A: Extreme fear, broadly – complete, temporary panic around, oh my goodness, what does that mean? What have they got? What have they done? Can this completely destroy us? How are we going to get over it? How are we going to solve it? So from that moment on, I worked very closely with the MD and my counterpart from our IT provider – who was excellent, I have to say, extremely supportive, and hadn’t experienced anything quite like this before in his time, either. But we got on the phone to our cyber insurers straight away, and they immediately put us in touch with lawyers specialising in this area, and also with a cyber security consultancy firm, obviously with a large amount of expertise in this area. And they immediately got to work on how we should manage things. We had numerous Teams calls with everybody on the line asking, obviously, more questions of us, but also trying to work out a plan and a way forward.

Around probably early afternoon, I sent all of our team home for the day. Some felt great about having half a day off. Some were immediately pretty perturbed by that, as to what may or may not have been happening and transpiring. So there was then a large amount of communication that had to go out to our internal team by that evening, explaining – without giving any detail – but explaining what had happened and what we were doing about it, and how we hoped to be up and running at least partially by the next day. So 24 hours, effectively, after we realised there was a problem.

PL: Did your tech support team have a sense of who they were potentially dealing with? Did they feel it was an individual? Did they feel it was a highly sophisticated organisation of hackers doing this stuff 24/7? Did you ever get any sort of insight into that?

A: I won’t name them, but it was a highly sophisticated operation based in a Russian-speaking country, an organisation that was well known to the cyber security team, who are very serious people who – you know, you have this sort of view that it’s some strange individual sat in a dark room in their underpants who’s doing it to you, whereas in reality it’s a highly sophisticated organisation that has a structure the same as any other business – the only difference being that it is criminal activity that they’re doing.

PL: At what point did you talk to clients?

A: We notified clients 24 hours after the attack, or after we became aware of the attack. By that point, we were sort of limping back towards being operational.

PL: And meanwhile, the hackers have not issued a specific cash demand. You know they want money, but you don’t know how much? They’ve just said you need to get in touch with them, but you were advised not to do that?

A: We were advised not to do it because our back-up meant that they hadn’t crippled us and we were able to function. And then it was just a case of trying to almost stall and buy time, to try and ascertain whether their claim was correct and they did have some data from our systems. That was the first thing – but also to work out exactly what that data consisted of, in order to determine how big a problem this was, and how big a problem it would be if it went out onto, you know, the dark web. That would then determine what our strategy in regard to the hackers would be. Once we realised, yes, it’s a genuine hack, they have taken stuff, but it is relatively low risk and not hugely sensitive data, then the advice that we received, and which we followed, was: do not engage with them at all. So we didn’t.

PL: And ultimately they did release the data, as they threatened to do, and then it was damage limitation?

A: Yes, they released a chunk of what they said they had. So we still don’t 100% know to this day whether they had everything they said they had, or whether they had a third of it, because it was only a third of it that they bothered releasing to the dark web before they moved on, I assume, to new targets. And they gave us up, as they’re not going to get anything from us. So they moved on.

PL: That must have been very disempowering – that period of time, which sounds like it was quite long, before you could be fully sure you understood what they had or might have, or what they wanted and how you should respond?

A: Absolutely. That was extremely disempowering, as was the fact that it took – and bear in mind, this was with cyber security experts who do this day in, day out, analysing systems and so on – at least six weeks to ascertain how our system had been breached. That in itself was pretty disconcerting as well, although finding out how it eventually happened, and realising that there wasn’t a hole anywhere in our security, was of course comforting to a degree. But those six weeks of not having any idea, and therefore not knowing how to put it right, were extremely concerning, yes.

PL: And what’s, in a way, even more concerning is that you had a robust plan. You had thought about this, you had system defences in place, you’d carefully protected your data, you had insurance, and your staff knew what they should and shouldn’t do. And yet, by just accessing a perfectly legitimate-looking site, they infiltrated your systems and this happened. So the lesson from that, I guess, is there’s only so much you can do, isn’t it? It’s about damage limitation once it’s happened – you can put defences in place, but it doesn’t sound as though there was anything you could have done that would have prevented this from happening.

A: I don’t think so, particularly. I mean, there are always things that you can do and money that you can spend. But hindsight is a wonderful thing, isn’t it? This is a problem – and I know a lot of people in the cyber security world will say this – that people don’t take seriously enough until it’s happening to them. And that’s absolutely the case. We were in a much better position than a lot of other businesses in that we had taken it seriously enough to get insurance, and we had pretty good systems, all in all. But the one thing we didn’t have, which would have prevented this, was endpoint protection – which, up till that point, had been vaguely on the radar, but nobody had said: “You absolutely must have this. It’s essential.” That would be the only thing that, in hindsight, we could have had, and it would have prevented it.

PL: And in the event it was perfectly understandable human error. The person in question who opened this opportunity to the hackers didn’t do anything that anyone would think was illegitimate or foolish. Just used a website that looked perfectly fine to look something up, and that was that.

A: Which just shows you how easy it is to fall into this position, no matter what provisions you think you’ve got in place. This could have happened to anybody, anywhere – in any practice or any business, really, anywhere in the world. And I said this to the employee whose PC was infected and was hacked – that this could have been me, easily.

PL: That must have been a horrible moment for that person, when they realised?

A: Hideous, yes – a large degree of upset. But we supported the individual 100%, because they had done absolutely nothing wrong. They were, effectively, a victim in this, in that they were the unfortunate one where the hackers found a gap, but it was entirely wrong place, wrong time. It could have been anybody.

PL: It’s a sobering tale, isn’t it, and presumably the sort of thing you all come across all the time?

MT: That’s fairly typical of what you would see in a corporate attack and ransomware. Personally speaking, their company had obviously put some good practice in place. They had defence in depth, which is something we preach to our clients – you can’t just rely on one thing. And he’s identified what the extra defence is…

PL: Yes, this was a couple of years ago, in fairness, I should say, so, yes.

MT: So yes, endpoint security is everywhere now.

PL: But do you want to just explain what that is?

MT: The simplest explanation of endpoint security would be a kind of more advanced version of antivirus. Five or 10 years ago, antivirus was your endpoint security – everybody had antivirus on their laptops, and it would flag when you were going to risky websites and clicking on things like that. Endpoint security is more advanced than that: it’s doing a lot more reporting, a lot more control, and implementing security policies on your endpoint – on your laptop or on your phone. So they are additional tools in the armoury against an attack.

The other thing that obviously saved this organisation is the back-up, and I can’t stress that enough – having robust back-up procedures. They were able to get things back online: slowly at first, but fairly quickly overall. He said things started to come back after a short period, but it took weeks to get fully operational again. If they hadn’t had back-ups, who knows where they would be – it would have been very difficult for the business. They may have had to negotiate with the criminals then. So they were doing some good things, but, yes, they’ve identified the chink in their armour.

IP: And it’s very easy to listen to a tale like that and almost feel helpless, and say, you know, why should I even bother? Because they did a lot of really, really good things, but still fell victim to an attack. So as an organisation, what’s the point in me even bothering having defences in place? And the whole counter argument to that is to think how much worse it could have been if they’d not got in place what they had in place.

PL: I think they were very alive to that and very deeply grateful they’d taken out the insurance as well, which apparently had been quite a contentious decision, because obviously it’s not cheap.

MT: Cyber insurance is always a contentious decision – as is a lot of insurance, isn’t it? You pay a lot of money and you never want to, actually, you know, claim on it.

PM: So I look at it more as – I’m a risk guy, a compliance guy – there are two elements. There’s the preventing-the-attacks element, which is having the right IT set-up and back-ups that you can restore if you need to. And the other bit around that is, it sounded like they didn’t have an incident plan. Businesses should have planned for these scenarios that could happen – I know they’re slightly unlikely – even if it’s just who they’ve got to phone up or what they’re going to do with staff.

PL: I think they did have consultants, and they rang them straight away. But you’re right that internally, perhaps…

PM: It’s panic. So if you game that, or do some of that in advance, that’s what smarter businesses do.

IP: And in that situation, when you know you are under attack, there’s a lot of things going through your head. So what you don’t want to have to think about is, what are the steps I need to take, so you want that written down. You’d want to be able to just go to that and say, OK, I now know what I need to do. I’m going to do this in this order, step, step, step, step, step, and then it becomes something that is slightly easier to manage, because you’re not trying to think on your feet.

MT: And test the plan as well. So putting a plan in place is great, but test it. And that can be as simple as a tabletop exercise, or it can be a simulated attack, and seeing how things – how you would respond to that, and that depends on the size of your organisation.

IP: And also, if you’re a regulated business, you may have other reporting that you’ve got to think about, and other internal things.

PM: But like you say, it’s all panic, but it’s best to have that sort of planned out so it’s just a checklist, and you know what you’ve got to do in the moment, because it’s hard in the moment to think properly and rationally.

PL: Ian, getting back to AI specifically, I know you’ve been playing with very widely available software to see how easy it is to put it to nefarious uses, haven’t you? We’re going to play in a clip that you’ve prepared for us just to show people what you’ve been up to. Should we hear it?

IP: Hi, my name’s Ian Pay. I’m Head of Data Analytics and Tech at ICAEW, and I’d like to tell you about the importance of good cyber security, except this isn’t really me. I created an AI avatar to mimic my voice and appearance. It’s pretty realistic, don’t you think?

PL: It’s really concerning because here you are in the studio, and there you are. And, you know, I’ve got headphones on. There’s not much difference in here.

IP: And honestly, for listeners, just to be really clear: I never said those words. The process I went through – doing it myself, with my own consent – was to record some audio and video of myself, about two minutes’ worth, only two minutes. I loaded that into a commercially available platform – it’s got good GDPR policies in place, so I had some trust that it was a safe place to do it. It spends a bit of time processing that video and audio, and then once it’s processed I can, pretty much on the fly, create audio and video that looks and sounds like me, but isn’t me.

PL: You said the video is perhaps a little more prone to glitching and being a little less convincing?

IP: It’s those sort of visual cues – you know, when you’re talking to someone, the mannerisms that they have, the way that they blink, the way that they gesture, the AI tools are just a little bit more wooden, perhaps, but the audio is really, really convincing. And this is, you know, often quite a key attack vector from a cyber security perspective, to use these deep fakes to, basically – we talked about CEO attacks: basically if you have a CEO who is very active in social media, very present, out there a lot, there will be lots of recordings, videos, audio of that CEO speaking, that a criminal could get hold of and use the technology that I’ve used. You know, I used legitimate technology, a legitimate website, but the same technology is very widely available, and you can be sure that criminals are using exactly the same tech.

PL: And what might they do with it?

IP: What they’re going to basically do with it, is to impersonate someone – for example, impersonate a CEO – and then get that AI recording, or that AI version to phone someone up and have a conversation with them. And the audio side of things, genuinely, it can be done pretty much on the fly so it can be responsive, if you integrate it with your large language model technology, which is like ChatGPT and Google Gemini – those sorts of tech.

Once you’ve got those two things working together, you can basically have a conversation with someone pretty much live. The audio that you heard there, it probably end-to-end took me about 15 to 20 minutes to put together. Once I’d actually put the script into the platform, the audio was generated almost immediately. The video that came along with it took about two minutes to generate. So it’s really, really quick. So just doing audio AI, it is instantaneous, basically. And with the technology that we have now, you can pretty much take a voice prompt – so someone talking to the AI – and the AI will respond in kind.

So, you know, what criminals are doing is basically using it – so, a CEO phones up and says: “We’ve got to get this transaction sorted really quickly. Can you make this payment, or can you approve this? Or can you do that?” There’s always a time pressure. There’s always an urgency, because they want to disengage your rational brain. They want you to act based on what you’re being told to do, and not give you the chance to think and take that step back and say: “Hang on a minute, this is a bit weird.” And also, they’re trying to keep the conversation as short as possible, because the longer that you engage with it, the more likely you are to realise that it’s actually not a real person that you’re talking to.

PL: And Paul, I know you’ve talked about criminals embedding people within organisations – presumably, if you do that in tandem, you’ve got a record of a fraudulent conversation, haven’t you?

PM: Or even, I was thinking, use it to get through ‘know your customer’ checks. You could make a video, have somebody’s ID, and effectively open an account in their name, potentially. I’ve even heard anecdotally – I haven’t seen it in my own firm – of companies being onboarded where there are fake people at the company, using the video you’re talking about, Ian – and that might work if other controls fall down. I think it’ll only get much stronger and better over time. We’re just on the cusp of this being used well.

MT: It’s very much in its infancy. And as Ian was saying, the audio side of things is a lot more advanced and a lot more usable. The video side is a lot harder. You can take video, you can take YouTube videos of people and then start to manipulate them. You see a lot of that kind of fun stuff with, you know, with sports stars and TV personalities and that, and making them say things and do kind of strange things.

IP: And that’s been around for a really long time. But the difference is, you know, when they were doing this five, 10 years ago, it would take them days to manipulate that video, but now you can do it in, you know, a matter of minutes.

PL: And this is software? I hadn’t understood that this is software that is freely available because organisations use it for doing things like voicing up corporate video content in different sectors.

MT: Some of it’s free, right? I think the one Ian used is probably a paid-for one.

IP: Yes, the platform I used – the audio that you heard there I was able to create for free, but it was only about a $30-a-month subscription for the premium tier, which would let me create much longer, more detailed content.

PL: So there is no high price point for getting involved in this sort of fraud then?

MT: No, there’s some open-source software that’s actually free to download, and you can play around with it. There are some cloud-based applications as well, where you can obviously use the processing power of the cloud from organisations like Google or Amazon.

PL: So thinking about responses to this – we’ve talked a lot about threat and I’m thinking we should get onto what businesses can do to protect themselves more – do biometrics assist with combating that sort of threat?

MT: They can do. The risk with biometrics, obviously, is that deep fakes are starting to replicate the biometrics in a fake way. An example would be certain banks that use voice authentication – in fact, they actually call it that: “My voice is my password” – where you have to say a phrase and it authenticates you onto the bank straight away, without usernames, passwords and additional factors. So if you can replicate that, and you know what the phrases are, then obviously you might be able to get onto someone’s telephone banking. The same goes for biometrics such as unlocking your phone or your laptop using your face – they’re open to attack if you can replicate the facial biometrics.

PL: So the resolution is good enough on the fakes, is it, to get through those sort of biometric barriers?

MT: We did some research a few years ago where we were trying to defeat a mobile device’s facial recognition – we won’t say which one. You start by just printing out a picture of the individual; then you take the picture and stretch it around a balloon or something like that, so it’s got a bit of three-dimensional shape.

PL: Does it work in some devices?

MT: It does. And the organisation I used to work for even went to the lengths of 3D-printing a head of the person whose account we were testing. Then it starts to get very real, because you’ve got all the 3D contours of a real person’s head. So biometrics can help – I’m sure we all use Face ID and the thumbprint – but we need to be a little bit cautious, because if our faces and our voices are our passwords, they’re very accessible to [being faked].

IP: The key there is the multifactor. So when we’re talking about authentication, you don’t want to just use a password or just use biometrics. You want to use a combination of different authentication methods to verify identity.

PM: And I think that’s happening with ‘know your customer’ and onboarding as well, in my sort of world, because you’re using multiple factors – from devices and things like that, and passwords – but also even in the onboarding flow, to get around the possibility of deep fakes. It used to be that everybody said the same thing in a video, and then all the fraudsters knew what everybody was going to say. So now they make you say different things in real time, and maybe take your ID document into the photo with you, so you’ve got control of that. The other thing we’re moving into – they certainly are in Europe – is the digital wallet. But the risk with that is, if somebody gets hold of it, it’s an incredibly powerful tool, so you’ve got to lock it down really tight. I formerly worked in crypto – you could almost put it on a chain, have a private key, and take back control of your own data, which would be a great thing for consumers. But only as long as it’s locked down and nobody can get into it, because it would be incredibly powerful if that was stolen.

PL: Remote working must be making this all much easier for fraudsters? Because, I mean, it’s like the dodgy video resolution – you know, we’ve all been on those calls, haven’t we – or this whole thing you say of holding up documents or saying phrases. If you’re doing it remotely it’s a gift.

MT: There’s a fairly famous case from the last year of a deep fake used against a business in Hong Kong – it was actually a British business with a Hong Kong office – and they managed to transfer $25m, or HK$200m, by duping a clerk with what they called a deep fake of the CEO. What he was saying was all urgency: “We’ve got this transaction we need to do. We need to get it done immediately.” And he did it. I did a bit of reading around this over the past few days, and it was a kind of deep fake – but what they’d done is taken video footage of previous meetings. They managed to get hold of some footage of the CEO, the CFO and a few other people, and combined it with what Ian has been doing – faking the speech – plus what you were saying about grainy footage. They managed to get multiple fakes onto a video call with this clerk. He was suspicious at first, but people kept joining whom he recognised, and he might not know their voices in enough detail to spot the fake. Eventually these fakes convinced him to transfer the money, which he did. I wouldn’t say that’s the deep fake you see in sci-fi Hollywood – the thing maybe everybody out there thinks is happening, where you just go to a computer and tell it what you want it to say and do. It’s a lot more work than that.

IP: But it’s effort versus reward – they put a lot of time into really honing those fakes and getting them working slickly, but the reward they got was $25m.

PM: I was going to say, it’s a bit like window dressing – it just makes it more real, enhances the scam. But for businesses listening, the scam red flags are still there: the urgency that isn’t really warranted. There are common traits, common red flags, in these things. They’re just amping it up a little by putting on a nice video, something that really convinces people they’re talking to their colleagues – often colleagues they’re probably a bit scared of. They normally use somebody who’s a bit of a scary CEO saying: “Do this now.”

PL: That’s exactly it. We could tell horror stories all day about this, I think, but thinking about solutions, it’s that, isn’t it? I asked the accountancy firm we heard from earlier what they do now that they weren’t doing before, because they were pretty well prepared before. Fundamentally, they’ve got endpoint protection now, but everything else is pretty much as it was. What they are doing is monitoring every month, and the key thing is that every single person under the roof is doing training every month; it’s signed off, and if they haven’t done it, they’re chased up. So it’s about the awareness, isn’t it? Because everything you’re saying is about the tricks of the trade: you can make it more sophisticated, you can add layers of stuff to make it more convincing, but essentially it’s about duping someone in the organisation.

IP: Your best defence is organisational culture.

PL: So is that the key message from all this, that you can look at all this stuff, you need to understand that you need the defences, but it’s really about training your people?

MT: It’s a big, important part. It’s like what I said right at the start, there’s multiple areas you need to look at. You need to look at your systems, you need to look at your endpoints, and you need to look at your people. And the bit we’re talking about here is looking at the people, so training them, giving them awareness, you know, doing some simulated attacks against them. So, one of the things that we do in our organisation is we do simulated phishing attacks. They’re safe, but we’re trying to see if people can identify the risk and not click on the links and report it as they should do.

PL: And what sort of hit rates do you get on those?

MT: So we do it with all sorts of organisations. Sometimes a really sophisticated one will get 100% hit rate in a small organisation – so maybe an organisation with 20 to 30 people; if it’s very convincing or very enticing, and the company hasn’t had much training, you get 100% hit rate, which is obviously a red flag. And then you need to start really ramping the training up, and the awareness. With maybe a less sophisticated one, you know, a couple of typos, or a domain name – the address of where it’s coming from – if that’s very suspicious, then obviously we’ll get a lower hit rate. So, yes, it can range from anything from zero to 100%.

PM: The other thing we haven’t talked about – if you’ve got the money – is penetration testing, where you’re actually paying people who were hackers, or know how to hack systems, to try and find vectors into your systems and find ways in. I totally get it, though – I think the fundamental point is staff and awareness, because even if you’re a one-man band, you should just make yourself aware, you know, do some training. But you’re only as strong as your weakest link. Fraudsters, criminals – as we said earlier, it’s a business. They go after the weakest person they can find. It’s probably somebody who can send money. There are certain roles as well – you might want to risk assess your business, because those will be the ones they go after. So you can do a bit of that. It’s all about the incident planning.

IP: And, you know, thinking about our members, financial control is a really, really, really high-risk target, because they do have the ability to move money around. They are potentially under pressure from senior leadership. So those sort of people in that kind of middle tier are really the ones who are possibly most at risk of any part of the organisation. It is the finance people, because you hold the purse strings.

MT: But if you can find somebody else who’s a weak link, you can use them as a stepping stone to get into an organisation, and then you can pretend to be – or you can send emails from – that person to the finance people, or you can attack the CEO.

PL: And the same thing with supply chains to target larger organisations further up it. But just looping back to culture, I’m guessing a lot of it is about not just training, but it’s about embedding the idea that you can say no and raise a flag and not feel urged, even if you think it’s the CEO, even if you think you should be doing it – because that’s quite a big step for people to take, isn’t it, junior people, and I suspect that’s exploited quite a lot, isn’t it?

MT: We use the phrase ‘stop, question, verify’. So if you’ve got any inkling that something isn’t right, stop, yes, and then go and do the verification. So, you know, call the person, get someone else to check what you’re doing is right.

IP: Because there’s two sides to the deep fake thing. There’s what you do if someone is trying to deep fake you. So someone is trying to attack you and using a deep fake on you. And the approach there was, you know, exactly what you’ve just said – to challenge it, to question it. If it’s an AI voice deep fake, ask it questions. Ask it what it had for breakfast, asking, you know…

PL: Ask it odd questions.

IP: You know, there’s nothing wrong with asking a question that is slightly off the beaten track, because it will throw the AI a little bit off its plan.

MT: Or movement – only don’t start getting people to do star jumps in meetings!

PM: I was going to say, actually, the same with fraudsters. Add friction, ask questions, slow this down, because they want that urgency. They want you to do the wrong thing. And there isn’t that urgency, really.

MT: You talk about the fraudsters – we’ve seen telephone fraudsters trying to do all sorts of things. We haven’t dealt with that problem; it’s still happening. We still hear news stories about people transferring money out of their bank account because fraudsters duped them into doing it. This is a ramp-up, a more sophisticated version of that. Most people are fairly well educated now: if somebody randomly rings up pretending to be from your bank, don’t do what they say. This is taking it to the next level – adding realistic voices, realistic facial movements and things.

PL: Presumably, complacency is the enemy here, isn’t it? Because if you put system defences in place, you think you’ve trained your people, you sent them on the course, I imagine it’s about reiteration and also just threat assessment. How often should organisations be looking at their defences?

MT: They should constantly be looking at their defences. How often they should be training their users is a difficult question, because cyber security training is only one thing you get trained on. You know, I’m sure in the accountancy world, there’s lots of financial regulation training, as well as cyber security training, health and safety training – all these things. So you’re bombarding your staff with lots of training. So definitely every year, every six months, would be ideal.

PM: I think the practice exercises are good as well, like you said, where you send the emails and see if they get duped, because they can learn, and then they’re a little bit genned up and a little bit aware. That’s what you want, that awareness.

IP: But it’s also – and back to the example from earlier, where the attack happened – don’t victimise the person who clicks or falls for this thing, don’t isolate them and make them feel bad, because they will already feel bad having been singled out and picked on by the cyber criminal.

PL: As our anonymous guest said, obviously, the person in their organisation just looked up an accountancy term on a legitimate website that had been hacked. They’d used the website many times with no problems at all. The site had been hacked, no one could know. So that poor person did nothing wrong, and yet was the root of all their difficulties. But obviously they dealt with that person very well, but culturally, you can see how you’d be really frightened if you were junior. But if you’re working remotely, and you’re new, and you’re junior, that’s a big ask, isn’t it, if someone’s impatient, because sometimes things will need to be done in a hurry. And if you’re the one saying, “I’m not entirely sure I should be doing this”, there’s a lot of jeopardy there. And presumably, fraudsters play on that all the time.

IP: And there’s another clever attack vector that they often use, which is, if you’re new to an organisation, if you’re not familiar with the processes of the organisation, they will find that way in. So there’s an element, and I’ve sort of touched on it, of not falling victim to your deep fake. But there’s also an element of you as an individual, how you stop yourself being deep faked. And I don’t necessarily suggest that we live as hermits – we’re in a world where social media is incredibly important to our lives. I’m not saying never share anything ever on social media, but be very aware of what you’re sharing and who you’re sharing it with, because putting a lot of information out there in the public domain makes it easier for criminals to basically create a fake version of you, then use that to attack your colleagues, your family, your friends. And so you have a personal responsibility as well to really look after your own data and who you are.

PL: Do not ignore your privacy settings. I think we’re going to have to wrap it up there. It’s such an interesting subject. I know there’s so much more we could talk about, but we’ve covered a lot of ground. Thank you very much indeed. So there is a lot to know here – I think we’ve certainly established that much. We’re going to link to ICAEW’s Artificial Intelligence hub in the show notes for this episode so, listeners, you can all keep up to date with developments and see what else is out there.

Behind the Numbers will be back in late November. Before then, join us for a special episode of Accountancy Insights – we will be breaking down the contents of the Autumn Budget taking place on October 30. Thanks for being with us.
