The latest findings in IBM’s annual Cost of a Data Breach report reveal that the average data breach now costs a firm $4.45m, an increase of 15.3% from 2020. The cost of not investing in good cyber practices is climbing, particularly for smaller firms. Artificial intelligence (AI) will also have an increasingly large effect on cyber security, both as a source of risk and as a means of detection.
AI models have already made contributions to cyber security. Some have been developed to analyse vast data sets in real time, helping to detect and respond to threats with greater speed and accuracy than humans can manage alone. Some cyber security software even uses AI to balance the security of a system against users’ need to access its various layers.
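To illustrate the kind of real-time analysis described above, here is a minimal, hypothetical sketch of anomaly detection over event data using scikit-learn’s IsolationForest. The feature names, sample values and threshold are assumptions for illustration only, not a description of any particular product.

```python
# Hypothetical sketch: flagging anomalous login events with an unsupervised model.
# Feature names and values are illustrative assumptions, not a real product's schema.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [login_hour, bytes_transferred_mb, failed_attempts, new_device (0/1)]
baseline_events = np.array([
    [9, 120, 0, 0],
    [10, 95, 1, 0],
    [14, 150, 0, 0],
    [11, 80, 0, 0],
    [16, 110, 2, 0],
])

# Train on "normal" historical behaviour, then score new events as they arrive.
model = IsolationForest(contamination=0.1, random_state=0).fit(baseline_events)

# 3am login, large transfer, repeated failures, unknown device.
incoming_event = np.array([[3, 4200, 6, 1]])
if model.predict(incoming_event)[0] == -1:
    print("Anomalous event - raise an alert for the security team")
```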
In the event of a data breach, IBM research has shown that firms making even limited use of AI tools had data breach costs that were, on average, 28.1% lower than those of firms with no AI use.
The risks
AI models are sophisticated algorithms, but malware is also driven by algorithms, so recent developments in AI can equally be put to use in malware and scams. Just as finance teams are using AI for cognitive automation, insight and engagement, so too are threat actors.
In the 2000s, botnets such as Storm spread through spam emails: each newly infected computer sent out yet more spam, recruiting thousands of machines into the network. Storm then evolved to launch DDoS attacks to protect itself and maintain the network’s integrity. In the 2010s, worms such as WannaCry infected computers and began automating the extortion we see in ransomware today. In 2018, IBM presented a proof-of-concept AI malware called DeepLocker, which lay dormant inside a video-conferencing application until it recognised a specific face.
According to Malwarebytes, this evolution has already happened, with the Bizarro banking trojan hiding in organisations’ systems: it was seen emulating banking sites after legitimate ones had been visited, in combination with keylogging. All of this predates the more recent generative AI boom, which has made such techniques far easier to scale.
Recent developments in generative AI lend themselves to a proliferation of risks:
- Generative AI models are well suited to crafting personalised messages for highly targeted phishing (spear phishing) attacks, or to turning colleagues, friends or family into unwitting money mules.
- They can also be used to tamper with data, or even to modify themselves to hide from some detection methods or to build backdoors.
- They can fabricate scandals from which to profit, spoof emails, images, voices and videos, create privileged accounts, or generate fake websites for data collection.
- They might even be used to generate more malware, or to devise ways of smuggling further malware onto systems under the radar.
At a press conference earlier this year, the FBI reported that law enforcement was beginning to see hackers leverage open-source generative AI to develop new, more powerful malware, along with novel delivery methods such as AI-generated websites used as phishing pages. Earlier this year we also saw a scam using a deepfake of Martin Lewis.
The opportunities
Businesses and practices can use AI to develop proactive threat detection systems, predict vulnerabilities and automate incident response processes, reducing the burden on human security professionals. At a time when these professionals are in very high demand, making greater use of such technologies may be the key to increasing assurance over system security while improving efficiency.
Machine learning (ML) algorithms are likely to adapt to new threats faster than a human being can, based on similarities between threats. They are also better suited to monitoring user behaviour and, eventually, to dealing with smaller threats on their own. This again will free up information security teams to focus on other endeavours. For a field with a limited pool of professionals in high demand, these efficiencies will be a crucial tool in the future.
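To make the idea of behaviour monitoring concrete, the sketch below assumes a simple per-user baseline (the mean and standard deviation of past activity) and flags significant departures from it. Real products use far richer models; the field names and threshold here are purely illustrative.

```python
# Hypothetical sketch: per-user behavioural baseline using simple statistics.
# Field names and the z-score threshold are illustrative assumptions.
from statistics import mean, stdev

def is_unusual(history_mb: list[float], todays_mb: float, threshold: float = 3.0) -> bool:
    """Flag today's data transfer if it sits more than `threshold` standard
    deviations above this user's historical daily average."""
    if len(history_mb) < 2:
        return False  # not enough history to establish a baseline
    baseline, spread = mean(history_mb), stdev(history_mb)
    if spread == 0:
        return todays_mb > baseline
    return (todays_mb - baseline) / spread > threshold

# Example: a user who normally moves ~100 MB a day suddenly transfers 5 GB.
print(is_unusual([90, 110, 105, 95, 100], 5000))  # True - worth a closer look
```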
Lastly, tools that can work with both structured and unstructured data sets can more efficiently find correlations between the information provided by different tools simultaneously. Current cyber tools lack the cohesion and accuracy needed to make them completely reliable on their own.
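As a rough illustration of correlating output from separate tools, the sketch below joins hypothetical alerts from an endpoint agent and an email gateway on user and time window. The tool outputs, field names and five-minute window are assumptions for illustration only.

```python
# Hypothetical sketch: correlating alerts from two separate tools by user and time window.
# Alert contents, fields and the five-minute window are illustrative assumptions.
from datetime import datetime, timedelta

endpoint_alerts = [
    {"user": "a.smith", "time": datetime(2023, 10, 2, 9, 14), "detail": "unsigned binary executed"},
]
email_alerts = [
    {"user": "a.smith", "time": datetime(2023, 10, 2, 9, 11), "detail": "link to newly registered domain clicked"},
]

WINDOW = timedelta(minutes=5)

# Two low-severity alerts for the same user within a short window are far more
# interesting together than either is alone.
for ep in endpoint_alerts:
    for em in email_alerts:
        if ep["user"] == em["user"] and abs(ep["time"] - em["time"]) <= WINDOW:
            print(f"Correlated incident for {ep['user']}: {em['detail']} followed by {ep['detail']}")
```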
A 2021 MIT survey found that 96% of respondents were preparing for AI-driven attacks. At present, the cyber security profession is both making better use of AI and preparing better for it. Providers of antivirus tools such as Norton and McAfee are already using AI for monitoring, and investigations are ongoing into how tools for detecting AI-generated content could be integrated. Google and Microsoft have both been developing such tools, through Duet and Copilot respectively.
Conclusion
The arms race between threat actors and organisations will continue, with AI simply another tool in the arsenals of both sides. AI will continue to develop, and malware and scams will very likely develop with it, with the likelihood of AI-driven attacks only increasing over time. To prepare for this AI-driven future, businesses should invest in AI-driven security tools, train their cyber security teams in AI and ML concepts, and continuously update their AI systems to stay ahead of evolving threats.
Additional resources:
The National Cyber Security Centre (NCSC) has a number of useful resources to help in this regard, including:
- 10 Steps to Cyber Security for medium to large businesses;
- Small Business Guide: Cyber Security;
- Cyber Essentials tools including self-assessment and certification;
- Intelligent Security Tools – guidance for using AI in security systems; and
- Blogs on exercising caution when using Large Language Models (LLMs) and how to think about AI security systems.
ICAEW’s Technology Faculty also meets the NCSC quarterly; if you have any feedback on the tools or queries relating to them, please contact Oliver Nelson-Smith.
Cyber security awareness
Each year ICAEW marks global Cyber Security Awareness month with a series of resources addressing the latest issues and how to protect your business.