
Combating modern slavery: when Gen AI isn't enough

Author: ICAEW Insights

Published: 20 Jun 2024

Following his introduction to the iEARTHS artificial intelligence (AI) solution to combat modern slavery, David Wray discusses why causal AI was chosen over generative AI and how the team tackled bias and hallucinations.

It isn’t hard to see why generative AI (Gen AI) has grown in popularity in recent months: it takes a range of routine and often unexciting tasks off our hands. That said, its capabilities are limited.

This was the consideration when deciding which AI approach could address the complex issue of modern slavery, as discussed in my previous article. The problem needed a technology that could understand complex patterns and context, provide decision intelligence and decision support, and support reasoned expectation. All of these tasks are well suited to causal AI.

Causal AI is defined by Gartner as a form of AI that “identifies and utilises cause-and-effect relationships to go beyond correlation-based predictive models and toward AI systems that can prescribe actions more effectively and act more autonomously”. Causal AI combines ontologies, knowledge graphs and large language models (LLMs) to deliver a powerful human decision-like approach at scale.
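To make that distinction concrete, here is a minimal, hand-rolled sketch in Python – not iEARTHS or Parabole code; the variables, coefficients and forced-labour example are invented purely for illustration – of how a structural causal model separates the effect of intervening on a cause from merely observing correlated data:

```python
import random

# Toy structural causal model. Debt bondage causally raises forced-labour
# risk, and both are driven by a confounder (local poverty rate). All
# variables and coefficients are invented for illustration only.

def sample(intervene_debt_bondage=None):
    poverty = random.random()                        # confounder
    if intervene_debt_bondage is None:
        debt_bondage = poverty > 0.6                 # observational mechanism
    else:
        debt_bondage = intervene_debt_bondage        # do()-style intervention
    return 0.2 + 0.5 * debt_bondage + 0.3 * poverty  # causal mechanism for risk

N = 10_000
observed = sum(sample() for _ in range(N)) / N
effect = (sum(sample(True) for _ in range(N)) / N
          - sum(sample(False) for _ in range(N)) / N)
print(f"average observed risk: {observed:.2f}")
print(f"causal effect of debt bondage on risk: {effect:.2f}")  # ~0.50
```

A correlation-based model fitted only to the observational data would fold the confounder’s contribution into its prediction; the do()-style intervention isolates the cause-and-effect relationship Gartner’s definition refers to.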

The causal AI platform, from Parabole, is built in layers:

  • the principal causal model (PCM);
  • the rational causal model (RCM); and
  • the structural causal model (SCM).

The principal causal layer essentially contains general domain understanding of the subject matter: legislation, statistical data (economic conditions, vulnerabilities, etc), supply chain maps, industry-recognised forced labour frameworks, terms, definitions and known exploitation methods, among other information.

The rational causal layer contains specific use case understanding, such as industry types, recruitment methods, working conditions, investigative findings or other specific information relevant to the topic of forced labour or child labour.

The third and final structural causal layer focuses on analytics, including KPIs, data insights, trends, historical practices, or other decision-useful analysis.
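Purely as an illustration – the class and field names below are hypothetical and do not reflect Parabole’s actual implementation – the three layers might be pictured as a stack of increasingly specific knowledge:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the three layers described above. The names are
# illustrative only and do not reflect Parabole's actual implementation.

@dataclass
class PrincipalCausalModel:
    """PCM: general domain understanding of the subject matter."""
    legislation: list[str] = field(default_factory=list)
    forced_labour_frameworks: list[str] = field(default_factory=list)
    known_exploitation_methods: list[str] = field(default_factory=list)

@dataclass
class RationalCausalModel:
    """RCM: understanding specific to the use case."""
    industry: str = ""
    recruitment_methods: list[str] = field(default_factory=list)
    investigative_findings: list[str] = field(default_factory=list)

@dataclass
class StructuralCausalModel:
    """SCM: analytics - KPIs, trends and other decision-useful analysis."""
    kpis: dict[str, float] = field(default_factory=dict)
    trends: list[str] = field(default_factory=list)

@dataclass
class CausalAIStack:
    """A query would draw on all three layers, from general to specific."""
    pcm: PrincipalCausalModel
    rcm: RationalCausalModel
    scm: StructuralCausalModel
```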

The iEARTHS (Innovative Ethical AI for Remediation and Termination of Human Slavery) approach is unique because it heavily embeds the human element: behavioural models, survivor experiences, workplace experiences, sentiment and empirical research all feed into contextualising decision-making – which, rightly or wrongly, is often economically driven. Effectively, we’ve converted human traits, attributes and behaviour into 1s and 0s, allowing the technology to ‘understand’ them.

Tackling bias

The natural question that quickly follows is: “What about human biases?” In the context of AI development, bias occurs when an algorithm produces results that embed and perpetuate the human biases that exist within a society, including both current and historical inequalities.

The first challenge, of course, is to recognise that we ourselves may hold these biases, which, left unchecked, are inadvertently embedded within the AI and so perpetuate the inequalities. A recent example of this phenomenon was Apple Card and the allegations of gender bias in how credit limits were assessed and granted.

Bias often occurs because the data used to train the AI is flawed. For instance, suppose Company ABC is predominantly staffed by males and wants to use AI to screen candidates for new roles based on a profile of its existing best-performing employees.

It is easy to see how, if the organisation fails to normalise historical employee profile data to neutralise the inherent bias towards males, females will be inadvertently discriminated against by the screening algorithm.
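A toy sketch of that failure mode, with invented numbers and a simple reweighting step standing in for ‘normalising’ the data, shows one way the historical imbalance could be neutralised before training:

```python
from collections import Counter

# Invented historical 'best performer' profiles: the workforce is
# predominantly male, so gender leaks into the training signal.
historical = ([{"gender": "M"} for _ in range(80)]
              + [{"gender": "F"} for _ in range(20)])

counts = Counter(p["gender"] for p in historical)   # {'M': 80, 'F': 20}

# One simple mitigation: reweight records so each group carries equal
# total weight in training, then drop the protected attribute itself.
weights = {g: len(historical) / (len(counts) * n) for g, n in counts.items()}
for profile in historical:
    profile["weight"] = weights[profile.pop("gender")]

# Both groups now contribute equally to the total training weight:
# 80 * 0.625 == 20 * 2.5 == 50.0
```

Dropping the protected attribute alone is rarely enough in practice – proxy features can still encode it – but the reweighting shows what normalising historical profile data means.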

The iEARTHS approach to neutralising bias is a three-step process:

1. Validate data

All training data is curated and validated with several domain experts to ensure its quality.

2. Include survivor voices

Survivor voices are a key input into a custom-developed behavioural framework, balancing and countering perception with insights from real experiences across borders, industries, genders and socioeconomic groups, amongst other attributes.

3. Work with diverse user groups

All AI learning is supervised and validated throughout each stage of the process with different user groups and interested organisations. In the iEARTHS team’s case, bias can arise from a lack of awareness around the complexity of modern slavery due to privilege, holding values that lack universality or unconsciously holding beliefs with deep-rooted inequalities. Recognition is the starting point to preventing AI learning bias.

Managing hallucinations

The second AI risk any development needs to guard against is hallucination. AI hallucinations are a phenomenon in which AI (often an LLM) perceives patterns or objects that are imperceptible or nonexistent, creating outcomes that lack common sense or are plainly false. A recent public example was Meta’s Galactica LLM, which was pulled after only three days because it was unable to distinguish truth from falsehood, a key requirement for generating scientific text.

Again – it’s all about data

The iEARTHS team manages hallucination risk first by ensuring that AI training occurs only with high-quality, relevant and curated data. The language used is kept as free from idioms and colloquialisms as possible; accessible, common language is a key element in reducing the room for AI misinterpretation.

Building iEARTHS guardrails means, for instance, ensuring the AI’s role is clearly defined and that it cannot recommend an action that is detrimental to a human being’s wellbeing (physical, mental or economic) or lie (such as saying child labour risk in certain industries or countries is low when in fact there is a known issue).
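As a deliberately simplified illustration – not the iEARTHS implementation; the rule set, names and (industry, country) pair below are hypothetical – such a guardrail might wrap every recommendation in explicit checks before it is released:

```python
# Illustrative guardrail wrapper; the rules and names are hypothetical
# and far simpler than a production system would need.

KNOWN_HIGH_RISK = {("cocoa", "Country X")}   # pairs with documented issues

def guarded_recommendation(action: str, claims: dict) -> str:
    # Rule 1: never release an action flagged as detrimental to wellbeing.
    if claims.get("harms_wellbeing"):
        return "BLOCKED: action detrimental to human wellbeing"
    # Rule 2: never understate a documented risk (the AI must not 'lie').
    for pair, claimed_low in claims.get("low_risk", {}).items():
        if claimed_low and pair in KNOWN_HIGH_RISK:
            return f"BLOCKED: understates known risk in {pair[0]} / {pair[1]}"
    return action

print(guarded_recommendation(
    "Approve supplier onboarding",
    {"harms_wellbeing": False, "low_risk": {("cocoa", "Country X"): True}},
))  # -> BLOCKED: understates known risk in cocoa / Country X
```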

iEARTHS adopts a structured methodology and iterative development cycle, with human input throughout to validate the assumptions, inputs and outcomes at each stage of data input, learning, testing, iteration and outcome validation. This is key to the AI’s success.

Beyond the human considerations lie data privacy concerns and ethical questions around data usage and the application of the technology.

These are the areas we will delve further into during the next two instalments. Both are ideally suited for chartered accountants to lead on.

