Some of the most commonly occurring risks are outlined below with suggestions of possible mitigating steps that could be taken to manage those risks.

Risk mitigation plan

The principles outlined below apply to both advisors on M&A transactions and investors. Advisors will need to take extra precautionary measures in line with their professional duties and to manage the risks associated with providing advice to clients.

Risk: AI regulatory laws

Contravening AI regulatory laws could result in costly fines. This risk applies to both buy and build options. The main risk component is laws and regulation.

Reasons it is a risk

Industries and regions around the world have varying standards and laws regarding AI, privacy and fairness.

Refer to Ethical and legal considerations in this AI hub for more information.

Illustrative mitigating steps

Be aware of applicable AI laws and regulations based on where, how, and in what sector the model will be deployed.

In the case of advisors, draft firm policies around AI and client data use consistent with regulations, client standards and the firm’s corporate culture. Monitor compliance with the policies. 

Ensure service contracts and insurance guarantees explicitly address AI risks. 

Risk: Transparency

The lack of transparency could lead to errors. This risk applies to both buy and build options. Data, algorithm and quality risks are the main risk components. 

Reason it is a risk

Frequently, there is a lack of transparency for users around how the tools were developed (in terms of the algorithms or the data being used).

AI tools (by their very nature) can learn to arrive at their own conclusions without human intervention. It may be difficult to explain how the tool arrived at a particular outcome and whether that outcome is incomplete or inaccurate. Without thorough human review this could lead to errors not being detected.

Gen AI tools in particular heighten the risk of a lack of transparency (the “black box” problem), in that some models cannot explain why they produce a particular result or suggest a particular decision.

Illustrative mitigating steps

Keep a human in the loop to review and validate the AI system’s output, and ask the tool why and how it arrived at a particular result and what data was utilised. Users can then assess the quality, completeness, validity and accuracy of the output and decide how it is used.

For example, language models like Microsoft Copilot and ChatGPT can provide source references for their findings on specific research questions, which makes accuracy testing quicker and helps highlight any bias in their answers.
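As a simple illustration of keeping a human in the loop, the Python sketch below wraps an AI output, together with the sources and reasoning the tool reported, in an explicit human sign-off step before the output can be used. The class and function names, and the console-based review step, are illustrative assumptions, not a prescribed design.

```python
from dataclasses import dataclass, field

@dataclass
class AIOutput:
    """An AI result bundled with the provenance a human reviewer needs."""
    answer: str
    sources: list[str] = field(default_factory=list)  # references the tool cited
    reasoning: str = ""                               # the tool's own explanation

def human_review(output: AIOutput) -> bool:
    """Gate the output behind an explicit human sign-off before use."""
    print("Answer:", output.answer)
    print("Reasoning:", output.reasoning or "(none provided - treat with caution)")
    for src in output.sources:
        print("Source to verify:", src)
    decision = input("Approve this output for use? [y/N] ")
    return decision.strip().lower() == "y"

# Illustrative example values, not real deal data.
result = AIOutput(
    answer="Target's revenue grew 12% year on year.",
    sources=["FY23 audited accounts, p.14"],
    reasoning="Derived from revenue figures in the data room extract.",
)
if not human_review(result):
    print("Output rejected - escalate for manual analysis.")
```

In practice, the sign-off would sit inside the firm’s existing review workflow rather than a console prompt, but the principle is the same: no AI output is used until a named person has checked it against its sources.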

Draft AI policy documentation, for consumers and regulators, about why and how a model and its data are being used. This can provide transparency into the tool by acting as a proxy for aspects of the tool and its contents, without exposing the underlying data or the tool’s features (i.e. the firm’s intellectual property).

Such documentation could include a high-level overview of the tool itself, such as its intended purpose, the known trade-offs and risk mitigation measures, as well as information about the training data set and training process. 
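One way to structure such documentation is a “model card”-style summary. The sketch below shows what a minimal record covering the items listed above might look like; the tool name, field names and values are illustrative assumptions, not a prescribed standard.

```python
import json

# A minimal "model card"-style record. The fields mirror the items listed
# above (purpose, trade-offs, mitigations, training data and process);
# the names and values are illustrative assumptions.
model_card = {
    "tool": "Deal screening assistant",          # hypothetical tool name
    "intended_purpose": "Shortlist acquisition targets for human review",
    "out_of_scope_uses": ["Final investment decisions without review"],
    "known_trade_offs": ["Favours companies with rich public filings"],
    "risk_mitigations": ["Human-in-the-loop review", "Quarterly bias audit"],
    "training_data": {
        "description": "Public filings and licensed market data, 2015-2023",
        "known_gaps": ["Limited private-company coverage"],
    },
    "training_process": "Supervised fine-tuning; evaluated on held-out deals",
}

# A publishable summary for clients and regulators that describes the
# model's characteristics without revealing the underlying data or IP.
print(json.dumps(model_card, indent=2))
```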

Use independent standards or experts to evaluate the AI tool’s fairness, performance, and transparency to establish compliance with company and legal policies. 

This World Economic Forum article, written by McKinsey, provides clearly explained guidance on drafting documentation to increase transparency whilst protecting intellectual property.

Risk: Inaccurate or biased conclusions

Drawing inaccurate or biased conclusions could result in ethical dilemmas, reputational damage, legal liability and costly fines. This risk applies to both buy and build options, and the specific risk components are data risks, algorithm risks, tool training risks and quality risks.

Reason it is a risk

If:

  1. You are considering a deep learning model that may develop unstated objectives when provided with ambiguous instructions; or
  2. The data it is trained on is of poor quality (such as inaccurate, irrelevant, unethical, biased, or deepfake data), or is confidential or even illegal; or
  3. The AI tools haven’t been set up to accurately identify and extract relevant information while filtering out irrelevant information, allowing irrelevant data to skew the analysis; or
  4. The data to which it should be applied is not easily accessible, resulting in valuable data being missed;

then inaccurate conclusions may be drawn and poor decisions made. These risks can also lead to Gen AI tools producing inconsistent outputs from the same inputs.

If a third-party tool is being used, there is an additional risk that you are unable to maintain control over the data the model is trained on.

Biased data: if the algorithms are trained on biased data, or biased weighting is applied to the data (skewed towards or against any particular group, whether in terms of race, demographics, or otherwise), the algorithm will also learn to be biased. Consequently, companies may make decisions that are not in their best interests or that are interpreted as unfair, unethical or discriminatory. For example, an AI algorithm trained on data from the past may be biased towards making decisions that have worked in the past, even if those decisions are no longer the best course of action.

Determining who is accountable and liable when inaccurate conclusions are drawn based on the AI tool’s output is difficult and could be a costly process, particularly as there are no clear UK laws in place yet (as of July 2024).

Illustrative mitigating steps

1. Draft clear policies around the use of AI, its stated objectives and the use of client data that are consistent with regulations, client standards and the firm’s corporate culture. Client standards should cover the interpretability of the model (do you know and understand its inner workings?) and its explainability (can you explain why it made a particular decision?), and what risks the use case might carry due to poor interpretability or explainability.

For algorithms that could have significant consequences if incorrectly set up, make sure more than one person sets up the instructions and that a third person tests their logic to ensure they are clear. If risk averse, consider using simpler AI tools instead of deep learning models such as Generative AI.

Involving a corporate finance working group to select the tools and assess outputs is vital to conclude on the accuracy of the tool’s workings.

2. Ensuring the AI tool’s objectives are clearly defined will reduce the risk of the tool misinterpreting its instructions.

3. Regularly and carefully review the data used to train the tool, applying professional scepticism. This is particularly vital during the development and deployment phases of AI implementation; however, these data tests should continue on an ongoing basis as new data is fed into the AI tool. Having a two-step authentication process is one way to make sure that the data is processed and the model is trained correctly.

4. To manage data quality, and before making any decisions, legal and other M&A experts need to understand how an AI system has reached its conclusions or predictions, to ensure that the insights produced are valid, reliable and compliant with applicable laws. The tool should be able to answer queries on why and how it arrived at a particular result, and what data was used.

Teach the AI that certain data is incorrect and should be discarded when deriving conclusions on past trends, such as “one-offs” in the reported financial statements.

To offset the risk of an AI tool inaccurately evaluating diligence documents, advisory firms should find ways to manage related potential liability between themselves and their third-party AI providers (if involved).

To prevent the AI tool missing valuable data in its analysis, maintain a detailed inventory of all data sources, with a comprehensive cataloguing system outlining data type and location, so it can be easily spotted if a data source is missing. Audit data accessibility to ensure that authorised users have appropriate access to necessary data. A minimal sketch of such an inventory follows.
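For illustration only, the sketch below shows a minimal data-source inventory with a simple accessibility audit; the source names, fields and locations are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class DataSource:
    name: str
    data_type: str    # e.g. "financials", "contracts"
    location: str     # system or path where the source lives
    connected: bool   # is it currently feeding the AI tool?

# Illustrative inventory; names and locations are made up for the example.
inventory = [
    DataSource("Virtual data room", "contracts", "vdr.example.com", True),
    DataSource("Management accounts", "financials", "finance-share/accounts", True),
    DataSource("Customer CRM extract", "commercial", "crm-export/q2", False),
]

# Audit: flag any catalogued source the tool cannot currently reach,
# so missing data is spotted before it skews the analysis.
missing = [s.name for s in inventory if not s.connected]
if missing:
    print("Sources missing from the analysis:", ", ".join(missing))
```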

To reduce the risk of bias in the data, ensure the input data is representative of the population the output will be making decisions about. You could do this by building an inclusive community of experts to ensure diversification of learning and perspectives for the AI to learn from. See an example of how this was done to combat modern slavery. It is important to note that more training data does not necessarily mean less bias: larger volumes can merely reinforce existing biases, so it is important that a diversity of perspectives is involved.
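A minimal sketch of a representativeness check follows, comparing the make-up of a training set against the population the tool will make decisions about. The groups (sectors here, chosen to avoid sensitive attributes), the reference shares and the five-percentage-point tolerance are all illustrative assumptions.

```python
from collections import Counter

def representation_gaps(samples: list[str], population_share: dict[str, float],
                        tolerance: float = 0.05) -> dict[str, float]:
    """Return groups whose share in the data deviates from the population."""
    counts = Counter(samples)
    total = len(samples)
    gaps = {}
    for group, expected in population_share.items():
        actual = counts.get(group, 0) / total
        if abs(actual - expected) > tolerance:
            gaps[group] = actual - expected
    return gaps

# Illustrative data: sector labels of companies in the training set,
# versus their assumed share of the wider market.
training_sectors = ["tech"] * 70 + ["manufacturing"] * 20 + ["retail"] * 10
market_share = {"tech": 0.40, "manufacturing": 0.35, "retail": 0.25}

for group, gap in representation_gaps(training_sectors, market_share).items():
    print(f"{group}: {gap:+.0%} vs population share")  # tech over-represented here
```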

 

Risk: Privacy laws

Contravening data privacy laws could lead to costly fines and reputational damage. This risk applies to both buy and build options. The specific risk components are tool selection risks and data risks, as well as risks around laws and regulation.

Reason it is a risk

Privacy laws mandate how companies may use data and contravening these laws can lead to costly liabilities.

Even if the data is technically lawful, AI may heighten privacy concerns for advisory clients through unintended use of client-sensitive information when training the AI tool, which could lead to potentially sensitive outputs being generated and client trust being violated.

If you use third-party AI tools, another privacy risk is whether you have enough control over how the data you input into the model will be used and how access to it will be controlled.

Illustrative mitigating steps

Be aware of applicable data privacy laws and regulations based on where, how, and in what sector the tool will be deployed.

Draft policies around the use of client data in the AI tools that are consistent with regulations, client expectations and the firm’s corporate culture. Monitor compliance with the policies and be transparent with clients on the use of their data. 

Ensure service contracts with third party AI providers explicitly address data privacy risks.

Risk: Intellectual property

Contravening intellectual property laws could lead to costly fines and reputational damage. This risk applies to both buy and build options. The specific risk components are data and laws and regulation.

Reason it is a risk

The AI models may use data that is subject to intellectual property (IP) protection without consent, credit, or compensation, which could breach IP laws. 

Illustrative mitigating steps

This risk can be mitigated by using internally generated and owned data to train models, although this could introduce bias and limit the learning and usefulness of the model.

Risk: Cyber attacks

Cyber attacks could result in confidential data being accessed and leaked, or in the malicious inclusion of ‘bad’ data in a training set, affecting the model’s output. This risk applies to both buy and build options. The risk components are tools, laws and regulation, and data.

Reason it is a risk

New AI tools may be subject to enhanced security vulnerabilities and manipulation due to their complexity and the volume of the data held by the tools. 

Illustrative mitigating steps

Involve cyber security experts when developing and deploying new AI tools.  

Risk: Third parties

This risk applies to both buy and build options although the extent of the risk depends on the extent of reliance on third parties. Specific risk components include tool selection risks, data, algorithm, tool training and infrastructure risks as well as laws and regulation risks. 

Reason it is a risk

Designing, developing and deploying an AI model often involves third parties, whether that is outsourcing data collection or outsourcing the model itself. 

Illustrative mitigating steps

Know the risk-mitigation and governance standards applied by each third party, and independently evaluate and audit third party high-stakes inputs into the model. 

Risk: Infrastructure

Infrastructure that is not fit for purpose is a risk. This risk applies to both buy and build options.

Reason it is a risk

Existing IT infrastructure may not be scalable or sophisticated enough to support AI tools.

Illustrative mitigating steps

Review infrastructure needs and enhance systems where applicable. Consider third-party AI supplier systems that integrate with your own legacy systems.

Risk: Talent shortages

There are two schools of thought on the impact AI will have on staff. From a risk perspective, one school of thought anticipates potential job losses. Job losses could result in talent shortages, loss of foundational skills and reputational damage. This risk applies to both buy and build options. Staff is the main risk component.

Reason it is a risk

As fewer people may be needed to do the tasks that can be automated by AI tools, this could result in job losses and potentially a shortage of relevant talent in the firm.

Lord Clement-Jones (who co-founded and co-chaired the All-Party Parliamentary Group on AI) observed that while AI is being “trained”, junior associates are not: “What’s happening is that the opportunities for training and being on the spot with your senior people are being restricted. As AI crunches the data, the associate doesn’t quite get that leg up while they are learning, because a lot of those intermediate steps in professional services may well be undertaken by AI.”

Instead of focusing on data collection and analysis, advisors will need to refine their skills in areas like data interpretation, critical thinking, and relationship skills. 

Illustrative mitigating steps

Upskill teams to interpret the data and conclusions and train them on how to supervise, monitor and report on the AI processes that will be put in place.

Upskill junior staff to be less focused on analysis and more on strategic job functions, such as forming and strengthening relationships, drawing insightful conclusions, risk management, ESG and sustainability, and governance. Develop and enable them to think critically and creatively.

Risk: Staff workload

There is a risk of increasing workloads being placed on staff. This risk applies to both buy and build options.

Reason it is a risk

A global study has shown a disconnect between the high expectations of managers and the actual experiences of employees using AI, who say implementing new AI tools has in fact increased their workload and stress.

Illustrative mitigating steps

This Forbes article, highlighting the global study, suggests hiring freelancers with the required AI skills to support the staff base and train them up during the initial stages of AI deployment. Some firms may find that hiring these critical skills on a full-time basis is necessary to support the core team of corporate financiers.

Educate staff at all levels of the organisation about the time it takes to implement Gen AI and about the tools’ capabilities.

Risk: ESG goals

AI adoption could have a negative impact on achieving ESG goals. This risk applies to both buy and build options. The specific risk components are infrastructure and laws and regulation.

Reason it is a risk

Training and deploying AI models at scale may increase carbon emissions, as AI tools consume more electricity, and this could negatively affect the environment. Read more about the computing power needed in “AI is poised to drive 160% increase in data center power demand”.

Similarly, the data centres needed to develop and house computationally intensive models also require enormous quantities of water. Research from the University of California projects that AI demand will account for 4.2-6.6 billion cubic metres of water withdrawal in 2027 (equivalent to half of the UK’s total water usage).

Given possible job losses as described in the risk above, there may also be social implications to consider when implementing AI. 

Illustrative mitigating steps

Understand the climate change pledges your firm has committed to and estimate the carbon footprint and other ESG impacts of deploying new AI models on these pledges. Be transparent about the possible impact of AI deployment and consider means of offsetting the impact with other energy efficient consumption.
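As a rough illustration of estimating that footprint, the sketch below applies the common energy-times-grid-intensity approach (energy use scaled by the data centre’s power usage effectiveness, then multiplied by the local grid’s carbon intensity). Every figure is a placeholder that would need to be replaced with provider-specific data.

```python
# Back-of-the-envelope carbon estimate for an AI workload.
# All figures below are illustrative placeholders, not benchmarks.
gpu_hours = 5_000              # annual GPU hours for training + inference
power_per_gpu_kw = 0.4         # average draw per GPU, kW
pue = 1.4                      # data-centre power usage effectiveness
grid_intensity = 0.2           # kg CO2e per kWh for the local grid

energy_kwh = gpu_hours * power_per_gpu_kw * pue
emissions_t = energy_kwh * grid_intensity / 1000   # tonnes CO2e

print(f"Estimated energy use: {energy_kwh:,.0f} kWh")
print(f"Estimated emissions: {emissions_t:.1f} t CO2e")
```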

Evaluate the reputational damage from the societal impact of deploying AI tools.

If you are relying on third parties for any of the parts of the AI ecosystem (such as data centres), ask the third-party providers for their ESG report around the AI tools.

Risk: Unexpected costs

Another risk is being faced with unexpected costs to train and maintain the tools. This risk applies to both buy and build options.

Reason it is a risk

Extensive use of Gen AI may require either using dedicated hardware or significantly increasing cloud workloads which could increase computing costs payable to cloud providers.

Additionally, the cost of training and maintaining the models and related systems may lead to unexpected costs. Keeping AI models up to date on the latest data requires significant computational resources and time. 

Illustrative mitigating steps

Prepare a detailed analysis of the ongoing annual costs associated with the AI tool (such as computing and storage costs, and the costs to maintain the tool to ensure it stays relevant and fit for purpose), considering whether the tools are purpose-built or licensed (as these will have varying cost implications). Practical considerations contains further information on buy vs build.
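As a simple illustration, the sketch below totals a set of hypothetical annual cost lines and shows each line’s share of the total; all line items and figures are placeholders, not benchmarks.

```python
# A back-of-the-envelope annual cost model. Every figure is a placeholder;
# actual costs depend on the tool, provider and usage pattern.
annual_costs = {
    "cloud_compute": 120_000,          # training + inference workloads
    "storage": 18_000,
    "licences": 60_000,                # third-party tool/model licences
    "maintenance_retraining": 45_000,  # keeping the model current
    "staff_training": 25_000,
}

total = sum(annual_costs.values())
for item, cost in sorted(annual_costs.items(), key=lambda kv: -kv[1]):
    print(f"{item:<25} £{cost:>9,}  ({cost / total:.0%} of total)")
print(f"{'total':<25} £{total:>9,}")
```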

Smaller companies or those with limited budgets could use AI and machine learning solutions developed by third parties, such as due diligence tools which can be applied to a wide range of tasks or could start by using AI-enabled virtual data rooms.

Risk: Unexpected length of time

A further risk is the unexpected length of time it can take to implement new tools and train everyone on them. While this risk applies to both buy and build options, it is more prevalent when building your own model.

Reason it is a risk

Depending on the complexity of the AI tools, the decision to buy or build, and the number of teams involved in the design and development of the tools, the implementation process may take longer than expected.

Illustrative mitigating steps

Prepare a detailed AI planning and deployment framework with clear milestones. Assign a dedicated team to the project and manage each phase of the framework meticulously. 

Risk: Contravening client contractual terms

This risk applies to both buy and build options and it particularly affects laws and regulation.

Reason it is a risk

Where AI processing results in the offshoring of data, it may contravene the contractual terms of the client engagement letter or the target business’s non-disclosure agreement.

Illustrative mitigating steps

Thoroughly review the terms of all client contracts to which your firm is a party, so that you are aware of any terms relating to where data is processed.

Risk: Contravening internal IT policies

Contravening internal IT policies is an additional risk when using open-source AI models. The biggest risk component is staff. This risk applies to both buy and build options.

Reasons it is a risk

The IT policies of many organisations do not allow the uploading of potentially sensitive data to non-proprietary AI models (like OpenAI’s ChatGPT). Staff may not be aware of this policy and may utilise open-source AI tools without realising they are contravening the organisation’s IT security policies.

Illustrative mitigating steps

Run ongoing communication and training programmes to make staff aware of the risks involved in using open-source AI tools, how they differ from enterprise-licensed models, and how data is protected in each case.

Risk: Hallucinations

"Hallucinations” describes the process of an AI tool producing inaccurate conclusions while presenting it as accurate when the tool doesn’t even realise it could be wrong. This risk applies to both buy and build options.

Reason it is a risk

Gen AI models may “hallucinate” and produce inaccuracies due to, for example, outdated training data or missing data. Gen AI models may also struggle in fast-paced environments where information is updated at pace.

Illustrative mitigating steps

Any new process involving Gen AI needs to involve human verification.

Risk: Prompt injection attacks

A prompt injection attack is a type of cyber attack against large language models (LLMs) in which hackers disguise malicious inputs as legitimate prompts, manipulating the Gen AI system into leaking sensitive data or spreading misinformation. This risk applies to both buy and build options.

Reason it is a risk

Weak cyber security could result in hackers getting access to the prompts of a Gen AI system.

Illustrative mitigating steps

Build strong cyber security processes and policies specifically to protect against prompt injection attacks. 
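As one illustrative layer of such a defence, the sketch below screens user input against a few suspicious patterns and keeps trusted system instructions separate from untrusted user text. The patterns, function names and message structure are assumptions for the example, and pattern-matching alone is not a complete defence against prompt injection.

```python
import re

# Illustrative, not exhaustive: real defences combine input filtering,
# privilege separation and human review of sensitive actions.
SUSPICIOUS_PATTERNS = [
    r"ignore (all|previous|the above) instructions",
    r"reveal .*(system prompt|hidden instructions)",
    r"you are now",          # role-reassignment attempts
]

def screen_prompt(user_input: str) -> bool:
    """Return True if the input looks like an injection attempt."""
    lowered = user_input.lower()
    return any(re.search(p, lowered) for p in SUSPICIOUS_PATTERNS)

def build_request(user_input: str) -> dict:
    """Keep trusted instructions separate from untrusted user text."""
    if screen_prompt(user_input):
        raise ValueError("Input rejected: possible prompt injection")
    return {
        "system": "You are a due diligence assistant. Never disclose client data.",
        "user": user_input,   # untrusted content stays in its own field
    }

print(build_request("Summarise the target's lease obligations."))
```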

Managing Gen AI risks

You can read more about the risks and limitations of Gen AI tools on our dedicated ICAEW Gen AI page. It also outlines mitigating actions that can help to minimise the additional risks associated with these tools, such as using prompt engineering.

You may find that contracting an independent AI risk assessor (or a specialist AI internal auditor) will help you manage the risks associated with deep learning models such as Gen AI. The auditor could be engaged to carry out regular risk assessments of each phase of the AI model’s computing: the input, the algorithms and the output. Frequent reporting, monitoring and responding to risks could help manage the transparency and accuracy risks of the models.

ISO/IEC 23894, a standard for risk management of AI, can also provide guidance on how to mitigate such risks.

Impact on employees

Time will tell the scale of the impact that implementing AI will have on organisations’ workforces. There are various schools of thought, from very detrimental consequences to more positive ones, such as AI opening new career opportunities and improving job satisfaction as mundane tasks are ‘delegated’ to the machines.

A survey of 1,200 global CEOs found that 66% believe the impact of AI replacing humans in the workforce will be counterbalanced by the new roles and career opportunities that the technology creates. Meanwhile, a survey by the Upwork Research Institute found that while the vast majority of the C-suite (96%) expected productivity gains from the deployment of AI models, a majority of employees said that deploying these models had in fact increased their workload.

As mentioned in the risks above, a consequence of using some AI tools could be that junior M&A staff, who would typically undertake the data-crunching and analysis stages of the deal cycle, may no longer develop the foundational analytical skills needed to fully understand the underlying analysis behind a given outcome. And while new roles may open up as a result of AI, staff will need to be appropriately trained to take advantage of them.

 

Disclaimer

This AI in Corporate Finance content is being provided for information purposes only. ICAEW will not be liable for any reliance you place on the information in this material. You should seek independent advice. © ICAEW 2024