Before using generative AI in your organisation, consider the following:
Terms and conditions
Consider your own terms and conditions and those of generative AI technology and service providers. Also consider policies and measures around data location, consent and secure data handling (such as access controls, anonymisation techniques and encryption), as well as compliance with data protection regulations.
Policy and principles
The easy accessibility (and free availability) of generative AI tools and capabilities means organisations need to give guidance on what is and is not allowed, and what is expected of their people. Define and implement a policy for the governance of generative AI (or of AI more widely), with some preliminary principles, guardrails or restrictions on use.
Data privacy, security and intellectual property
Different ways of accessing generative AI models and capabilities raise different concerns about proprietary data and the privacy and security of sensitive information and intellectual property. Be clear about which data will or will not be shared with third parties, where it is stored, who owns it and how it is controlled.
Transparency and explainability
Understanding what goes on inside the black box of a generative AI model is difficult, even for its creators: deep learning encodes knowledge in millions of weighted, synapse-style connections rather than in rules that can be read off. Trying to explain how or why a particular response was generated is a bit like asking a toddler to explain why their favourite colour is yellow. Instead, review documentation and technical material from providers, as it may offer insights into capabilities, training methods and how risks have been managed. Questions to consider include:
- does the generative AI software tool offer built-in explanations of how it works?
- if it’s an LLM, what are the implications of how ‘prompts’ and questions are phrased?
- what’s the confidence level in generated responses, and is this configurable? (see the sketch after this list)
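On the last point, many hosted LLM APIs let you reduce output variability and inspect token-level probabilities, which can serve as a rough proxy for confidence. The following is a minimal sketch assuming the OpenAI Python client and an illustrative model name and prompt; other providers expose similar parameters under different names.

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; substitute your provider's
    messages=[{"role": "user",
               "content": "Summarise IFRS 16 in one sentence."}],
    temperature=0,   # low temperature reduces run-to-run variability
    logprobs=True,   # request token-level log probabilities
    top_logprobs=3,  # plus the top alternatives considered at each step
)

# Log probabilities near 0 mean the model saw little ambiguity at that
# token; strongly negative values flag spans worth a closer look.
for token in response.choices[0].logprobs.content:
    print(f"{token.token!r}: {token.logprob:.3f}")
```

Note that token probabilities are not calibrated confidence scores: they highlight uncertain spans for closer human review rather than guaranteeing accuracy.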
Accountants should be transparent about their use of generative AI by labelling or acknowledging where output has been produced by it.
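One light-touch way to meet this expectation is to attach provenance metadata to anything a model produces, so the acknowledgement travels with the output. Below is one possible sketch in Python; all names and fields are hypothetical, not a prescribed format.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LabelledOutput:
    """Wraps generative AI output with the provenance needed to label it."""
    text: str
    model: str   # e.g. the provider and model version used
    prompt: str  # the prompt that produced the output
    generated_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )
    human_reviewed: bool = False

    def label(self) -> str:
        """Disclosure line to include wherever the text is reproduced."""
        return (f"Generated by {self.model} on "
                f"{self.generated_at:%Y-%m-%d}; human reviewed: "
                f"{'yes' if self.human_reviewed else 'no'}")
```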
Context, credibility and accuracy
Caution and critical thinking are needed around data sources and the outputs generated by LLMs: they do not know what they do not know. Validate input data and how the model operates. Some LLM providers publish their training data sources, along with the weighting placed on them; where this information is available, review it. Outputs concerning accounting, audit, tax, standards, laws and regulations require careful consideration and verification, as they may be tailored to a specific jurisdiction without making this clear. Review, challenge and critique all generative AI outputs, as you would the work of a member of your team or output provided by an audit client. Professional scepticism applies equally to the output of AI as it does to any manually produced information.
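That scepticism can also be built into workflows: rather than letting model output flow directly into a deliverable, route it through an explicit review step that records who verified it and against which authoritative sources. A minimal sketch of one such gate, with purely illustrative names:

```python
def release_ai_output(text: str, reviewer: str,
                      sources_checked: list[str]) -> str:
    """Refuses to release generative AI output until a named reviewer
    has verified it against at least one authoritative source (the
    relevant standard, legislation or regulation)."""
    if not reviewer:
        raise ValueError("A named human reviewer is required.")
    if not sources_checked:
        raise ValueError("Output must be checked against at least one "
                         "authoritative source before release.")
    note = (f"[Reviewed by {reviewer}; verified against: "
            f"{', '.join(sources_checked)}]")
    return f"{text}\n{note}"
```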