ICAEW.com works better with JavaScript enabled.
Generative AI Guide

Cyber security considerations for using generative AI

When considering the implementation of Generative AI into your organisation, it is important to consider the risks that LLMs present to your organisation’s Cyber Security.

“Rather than investing in infrastructure to train models, we expect most accountants to use tools such as ChatGPT directly or a service that relies on generative AI models for outputs. If they are, they should be well aware of the need to secure access to such an ecosystem. Careful consideration should be given to the potential risks involved. Generative AI should be treated with the same approach as any other IT system when implementing security or IT controls,” says Neil Christie, Chair ICAEW Tech Faculty board.

This guide is focused on Large Language Models (LLMs) as they are the generative AI models that accountants will most likely interact with. These models are trained on data, to generate an answer or output based on a given prompt. 

Cyber security risks

In addition to the general generative AI risks, there are specific cyber security risks that apply to LLMs:

  • LLMs may make it easier for cyber criminals to carry out successful attacks: LLMs can enable those with malicious intent to write more convincing phishing emails in the native language of their target, or help create malware increasing the likelihood of fraud, scams, economic and organised crime There is a further risk that LLMs may be used to provide criminals with knowledge beyond their current capabilities. For example, once an attacker has access to a network, they may be able to ask an LLM how to elevate their privileges in a specific environment. Whilst they may be able to do this using a search engine, using an LLM may make the process faster and more contextualised.
  • LLMs are vulnerable to data poisoning attacks: LLMs rely on data for training. If training data is compromised or intentionally manipulated, the output will be impacted and may be biased or incorrect. While there is no easy mitigation for data poisoning, maintaining a level of moderation by treating outputs from AI services and applications as ‘untrusted’ until validated can be effective in detecting biased or incorrect outputs.
  • Confidential or sensitive information could be unintentionally leaked or exposed by the model: LLMs require a large volume of data to be trained, which may include sensitive data not intended to be exposed. There is a risk that prompting may expose or leak confidential or sensitive data. Users should be aware that information shared with an LLM may be visible to its developers and may be shared with other parties or stored and used to develop future versions of the LLM. Caution should be exercised in inputting sensitive or confidential information in queries or prompts to public LLMs.
  • LLMs may be prone to prompt injection attacks: Similar to SQL injection where SQL commands are entered into an input field providing unauthorised access to view or modify a database, prompt injections use prompts engineered to make the model behave in unintended ways for example by ignoring previous instructions or performing unauthorised actions. This can be done directly by using a prompt that includes instructions to ignore and override the application’s system prompts, or indirectly by using instructions embedded into external sources, such as websites or files which provide inputs to the model. Prompt injections can be used to reprogram the LLM to perform a prohibited action such as releasing confidential information, performing restricted actions, or allowing unauthorised access to systems and the assets of a business. Prompt injections use approaches similar to social engineering to prompt models to disregard instructions or find gaps and vulnerabilities. For example, a cybercriminal may use indirect prompt injections to bypass filters leading the LLM to fail to recognise restricted content requests, allowing unauthorised restrictions to be performed and revealing sensitive information such as user credentials or system details. This risk can be managed in more sensitive use cases by using input validation rules to manage prompts input into a model.

How to mitigate the risks

It’s important to remember that good basic cyber hygiene measures such as staff education and training, access control and supply chain management can help in mitigating the cyber risks associated with generative AI. Take the following actions:

Vet the third-party providers behind the LLM – Policies should be in place to research and vet the third-party providers behind the LLM. It’s important to consider the provider’s privacy policy and security features. Engage your information security, legal, and data protection teams to help you understand where responsibilities lie, and how to verify that the vendor is mitigating these effectively.

Staff engagement and training is key – Staff should be trained in general responsible cyber security behaviours and for generative AI, they should be trained to understand: 

  • how LLMs and generative AI tools work; 
  • the risks associated with the tools and when and how they should be used, including the type of data that can be shared with the tool. This can be defined in organisational policies and communicated to staff;
  • which LLM services and tools are approved for use based on organisational objectives, staff level of training, knowledge and experience. This can be defined in a list of approved services and tools and staff should be discouraged from using unapproved LLM services or applications.

Prepare to manage incidents in advance – It is likely that over time, vulnerabilities will continue to be discovered in the LLM or generative AI tools organisations use. Organisations and users should plan and test responses to cyber-related incidents caused by the use of LLMs.

AI services and LLMs should be separate and distanced from internal systems networks – Outputs from AI services and LLMs should be validated and checked before being used as input in further internal systems to make decisions. Ideally, AI services should be isolated from running on your local network unless validated by IT.

Additional resources

Cyber security awareness

Each year ICAEW marks global Cyber Security Awareness month with a series of resources addressing the latest issues and how to protect your business.

Close up of woman's hand holding a mobile phone, with a lap top open in the background. On the phone is the image of a padlock

You may also be interested in

Elearning
Finance in a Digital World - support for ICAEW members and students on digital transformation and technology
Finance in a Digital World

ICAEW has worked with Deloitte to develop Finance in a Digital World, a suite of online learning modules to support ICAEW members and students, develop awareness and build understanding of digital technologies and their impact on finance.

Resources
Artificial intelligence
Artificial intelligence

Discover more about the impact of artificial intelligence and the opportunities it presents for the accountancy profession. Access articles, reports and webinars from ICAEW and resources from tech experts.

Browse resources
Resources
Cyber Security Awareness month 2023
Cyber security awarness

Each year ICAEW marks Global Cyber Security Awareness month with dedicated resources to help you know what to do when a cyber attack happens.

Browse resources
Open AddCPD icon

Add Verified CPD Activity

Introducing AddCPD, a new way to record your CPD activities!

Log in to start using the AddCPD tool. Available only to ICAEW members.

Add this page to your CPD activity

Step 1 of 3
Download recorded
Download not recorded

Please download the related document if you wish to add this activity to your record

What time are you claiming for this activity?
Mandatory fields

Add this page to your CPD activity

Step 2 of 3
Mandatory field

Add activity to my record

Step 3 of 3
Mandatory field

Activity added

An error has occurred
Please try again

If the problem persists please contact our helpline on +44 (0)1908 248 250