With artificial intelligence (AI) rapidly reshaping how we do business and live our everyday lives, chief internal auditors are increasingly concerned that cyber-criminals will weaponise the technology to commit bigger, more sophisticated and more dangerous crimes.
According to a new report by the Chartered Institute of Internal Auditors, almost four in five chief internal auditors (78%) believe AI will negatively impact cyber security and data security, while 58% say it will exacerbate fraud.
The survey of 985 chief audit executives across 20 countries reflects the views of internal audit and risk management experts across Europe, and highlights growing concern among business leaders about the risks associated with AI.
The five risks that chief internal auditors expect AI to impact most negatively are: cyber security and data security (78%); fraud, bribery and the criminal exploitation of disruption (58%); digital disruption, new technology and AI (55%); human capital, diversity, talent management and retention (48%); and communications, reputation and stakeholder relationships (41%).
“AI is becoming the new frontier in the war against cyber criminals,” says Ian Pay, ICAEW’s Head of Data Analytics and Tech. “It is reassuring that AI has been identified as a critical risk to cyber security, but the next steps must be to take meaningful action to address this. While ‘fighting fire with fire’ (using AI to fight AI) will play an important role, we cannot overlook the human element in cyber security.”
Organisations are being advised to deploy AI as part of their own cyber defences: the best answer to AI-powered cybercrime is often an AI-powered cyber security solution. Some AI tools, for example, can detect ransomware activity within seconds.
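What "AI-powered" detection means in practice is often anomaly detection over endpoint telemetry. The sketch below is purely illustrative: the features (file writes per second, rename rate and the entropy of written data, since encrypted output looks near-random) are hypothetical, and scikit-learn's IsolationForest stands in for whatever model a real product actually uses.

```python
# Illustrative sketch only: a toy anomaly detector over hypothetical
# file-activity features. Real AI ransomware-detection products are
# far more sophisticated than this.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical baseline telemetry describing normal endpoint behaviour.
normal = np.column_stack([
    rng.normal(5, 2, 1000),      # file writes per second
    rng.normal(0.5, 0.3, 1000),  # file renames per second
    rng.normal(4.5, 0.5, 1000),  # mean entropy of written data (bits/byte)
]).clip(min=0)

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A ransomware-like burst: mass writes/renames of high-entropy
# (i.e. encrypted-looking) data.
suspicious = np.array([[120.0, 80.0, 7.9]])
print(model.predict(suspicious))  # -1 => flagged as anomalous
```

Because the model learns only what "normal" looks like, it can flag a never-before-seen strain in seconds, which is precisely the advantage the article's sources describe.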
While system-led phishing detection needs to improve continuously, staff also need to be trained to spot the less obvious signs of phishing emails, such as deceptive email aliases and unusual URLs. Deepfake audio and video are increasingly realistic and can trick employees into transferring vast sums of money, as seen in the recent scam attack on engineering company Arup in Hong Kong. “Cyber-enabled fraud is prevalent, necessitating regular review of controls and risk assessments using recent cases like Arup,” says Gareth Brett, ICAEW Interim Director, Trust and Ethics.
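The two "less obvious signs" above lend themselves to simple automated checks that can reinforce staff training. The sketch below is a hedged illustration using only Python's standard library; the function names, the expected-domain parameter and the example addresses are all hypothetical, and real phishing filters combine many more signals than these.

```python
# Minimal sketch of two phishing heuristics: a display name that doesn't
# match the real sender address, and link text whose domain differs from
# the actual URL target. Thresholds and rules are illustrative assumptions.
from email.utils import parseaddr
from urllib.parse import urlparse

def sender_mismatch(from_header: str, expected_domain: str) -> bool:
    """Flag mail that claims to be internal but comes from outside,
    e.g. 'Finance Team <boss@examp1e-pay.com>'."""
    _display, address = parseaddr(from_header)
    return not address.lower().endswith("@" + expected_domain)

def link_mismatch(link_text: str, href: str) -> bool:
    """Flag links whose visible text names one domain but whose
    href actually points somewhere else."""
    shown = urlparse(link_text if "://" in link_text
                     else "https://" + link_text).hostname or ""
    actual = urlparse(href).hostname or ""
    return shown.lower() != actual.lower()

print(sender_mismatch("Finance <boss@examp1e-pay.com>", "example.com"))   # True
print(link_mismatch("example.com/login", "https://examp1e-login.net/x"))  # True
```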
From a controls perspective, giving staff the tools to identify potential scams will be important, says Pay. “This would give all staff the confidence to challenge and question unexpected finance or data-related requests from leadership to be able to confirm their validity.”
There are also risks in using AI within an organisation without appropriate controls. As a first step, companies should have a clear AI usage policy that prohibits the use of internal or confidential data with public AI tools, together with a strategy for AI adoption across the organisation. Pay cautions against an outright ban on AI tools, however: “Given the appetite for the technology, employees will find a way to use it, potentially circumventing controls in the process.”
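One way such a policy can be backed by a technical control is a pre-submission check at whatever gateway sits between staff and public AI tools. The sketch below is hypothetical: the marker strings, the `allow_prompt` helper and the idea of a single choke point are assumptions for illustration, not a description of any specific product.

```python
# Hypothetical sketch of a policy-enforcement check a company might add
# to its AI gateway: block prompts containing internal classification
# markers before they reach a public AI tool. The markers are examples.
CONFIDENTIAL_MARKERS = ("INTERNAL ONLY", "CONFIDENTIAL", "CLIENT-PRIVILEGED")

def allow_prompt(prompt: str) -> bool:
    """Return False if the prompt carries a confidentiality marker."""
    upper = prompt.upper()
    return not any(marker in upper for marker in CONFIDENTIAL_MARKERS)

print(allow_prompt("Summarise this CONFIDENTIAL board paper..."))  # False
print(allow_prompt("Draft a polite meeting-reschedule email."))    # True
```

A check like this only catches labelled material, which is why the policy, and the transparency Pay and Brett describe, matters more than any single control.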
“A high percentage of employees report using AI without explicit consent,” adds Brett. “As AI becomes more widespread, multimodal and mobile, preventing its use may become impossible. The best strategy is to establish clear, accessible usage policies with defined parameters, as Ian suggests, and to encourage staff to share how they use AI so that this informs the policy.”
Encouraging staff to be transparent about their use of AI, while maintaining appropriate challenge and scepticism, will help individuals and organisations understand how to use AI technologies effectively. It will also build trust in AI as the technology continues to develop rapidly.
“Historically, criminals have been early adopters of new technologies, from limited companies to cars, the internet, cryptocurrencies and now AI,” says Brett. “There’s often a significant gap (up to two years) before firms catch up, emphasising the need for faster AI adoption and training to counter threats.”
Anne Kiem OBE, Chief Executive of the Chartered Institute of Internal Auditors, says: “AI is evolving rapidly and, as with all new technologies, it can be used for positive and negative ends. Our research has shown that chief internal auditors are alert to the threats, which should bring some comfort to organisations with a strong focus on risk control, risk mitigation and a well-resourced internal audit function. Internal auditors remain a force for good.”
The organisation’s survey included views from chief internal auditors in Albania, Armenia, Austria, Belgium, Bulgaria, France, Germany, Greece, Hungary, Ireland, Italy, Luxembourg, The Netherlands, Norway, Poland, Portugal, Spain, Sweden, Switzerland and the UK. The breadth of this research gives a comprehensive picture of the AI-related risks facing businesses across Europe.