It isn’t hard to see why generative AI (Gen AI) has grown in popularity in recent months: it relieves us of several routine and often unexciting tasks. That said, its capabilities are limited.
This was the consideration when deciding which AI approach could address the complex issue of modern slavery, as discussed in my previous article. The problem needed a technology approach that could understand complex patterns and context, provide decision intelligence and decision support, and support reasoned expectation. All of these tasks are well suited to causal AI.
Causal AI is defined by Gartner as a form of AI that “identifies and utilises cause-and-effect relationships to go beyond correlation-based predictive models and toward AI systems that can prescribe actions more effectively and act more autonomously”. Causal AI combines ontologies, knowledge graphs and large language models (LLMs) to deliver a powerful, human-like decision-making approach at scale.
The causal AI platform from Parabole is built in three layers:
- the principal causal model (PCM);
- the rational causal model (RCM); and
- the structural causal model (SCM).
The principal causal layer contains the foundational, general domain understanding of the subject matter: legislation, statistical data (economic conditions, vulnerabilities, etc), supply chain maps, industry-recognised forced labour frameworks, terms, definitions and known exploitation methods, among other information.
The rational causal layer contains use-case-specific understanding, such as industry types, recruitment methods, working conditions, investigative findings and other information relevant to forced labour or child labour.
The third and final structural causal layer focuses on analytics, including KPIs, data insights, trends, historical practices and other decision-useful analysis.
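To make the layering concrete, here is a minimal, hypothetical sketch of how the three layers might be represented in code. Parabole’s actual platform is proprietary; the class and field names below are illustrative assumptions, not its real API.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the three-layer causal model described above.
# None of these names come from Parabole's platform; they only illustrate
# the separation of concerns between the PCM, RCM and SCM layers.

@dataclass
class CausalLayer:
    name: str
    facts: list = field(default_factory=list)  # domain knowledge items

    def add(self, fact: str):
        self.facts.append(fact)

@dataclass
class CausalModel:
    pcm: CausalLayer  # principal: general domain understanding
    rcm: CausalLayer  # rational: use-case-specific understanding
    scm: CausalLayer  # structural: analytics and decision-useful insight

model = CausalModel(
    pcm=CausalLayer("principal"),
    rcm=CausalLayer("rational"),
    scm=CausalLayer("structural"),
)
model.pcm.add("Modern Slavery Act 2015: reporting obligations")
model.rcm.add("Agriculture: seasonal recruitment via informal brokers")
model.scm.add("KPI: share of suppliers with completed labour audits")
```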
The iEARTHS (Innovative Ethical AI for Remediation and Termination of Human Slavery) approach is unique because it embeds human knowledge throughout: behavioural models, survivor experiences, workplace experiences, sentiment and empirical research, through to the context of decision-making – which, rightly or wrongly, is often economically driven. Effectively, we’ve converted human traits, attributes and behaviour into 1s and 0s, allowing the technology to ‘understand’ them.
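As a toy illustration of what ‘converting human traits to 1s and 0s’ can mean in practice, the sketch below one-hot encodes a couple of behavioural attributes into a numeric vector a model can consume. The attribute names and categories are invented for illustration and are not iEARTHS’ actual feature set.

```python
# Toy illustration: turning categorical human attributes into numbers.
# The attributes and categories below are invented for illustration only.

RECRUITMENT = ["direct", "agency", "informal_broker"]
DEBT_BONDAGE = ["none", "suspected", "confirmed"]

def one_hot(value: str, categories: list[str]) -> list[int]:
    """Encode a categorical value as a vector of 0s and 1s."""
    return [1 if value == c else 0 for c in categories]

def encode_case(recruitment: str, debt_bondage: str) -> list[int]:
    return one_hot(recruitment, RECRUITMENT) + one_hot(debt_bondage, DEBT_BONDAGE)

print(encode_case("informal_broker", "suspected"))
# -> [0, 0, 1, 0, 1, 0]
```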
Tackling bias
The natural question that quickly follows is: “What about human biases?” In the context of AI development, bias occurs when an algorithm produces results that embed and perpetuate the human biases that exist within a society, including both current and historical inequalities.
The first challenge, of course, is to recognise that we ourselves may hold these biases; left unchecked, they are inadvertently embedded within the AI, which then perpetuates the inequalities. A recent example of this phenomenon was Apple Card, which faced allegations of gender bias in how credit limits were assessed and granted.
Bias often occurs because the data used to train the AI is flawed. For instance, Company ABC is predominantly staffed by males, and the company wants to use AI to help it screen candidates for new roles based on a profile of its existing best-performing employees.
It is easy to see how, if the organisation fails to normalise historical employee profile data to neutralise the inherent bias towards males, females will be inadvertently discriminated against by the screening algorithm.
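One common way to normalise such data is inverse group-frequency weighting, so each group contributes equally to training. The sketch below shows this on invented records for the Company ABC example; it is a generic technique, not a claim about how iEARTHS or any particular vendor does it.

```python
from collections import Counter

# Invented training records for the Company ABC example: historical
# "best performer" labels drawn from a predominantly male workforce.
records = [
    {"gender": "male", "top_performer": True},
    {"gender": "male", "top_performer": True},
    {"gender": "male", "top_performer": False},
    {"gender": "male", "top_performer": True},
    {"gender": "female", "top_performer": True},
]

# Weight each record inversely to its group's frequency, so each gender
# contributes equal total weight to model training.
counts = Counter(r["gender"] for r in records)
for r in records:
    r["weight"] = len(records) / (len(counts) * counts[r["gender"]])

print([(r["gender"], round(r["weight"], 2)) for r in records])
# -> males get weight 0.62 each; the single female record gets weight 2.5
```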
The iEARTHS approach to neutralising bias is a three-step process (a simplified sketch follows the steps below):
1. Validate data
All training data is curated and validated with several domain experts to ensure its quality.
2. Include survivor voices
Survivor voices are a key input into a custom-developed behavioural framework, balancing and countering perception with insights from real experiences across borders, industries, genders and socioeconomic backgrounds, among other attributes.
3. Work with diverse user groups
All AI learning is supervised and validated throughout each stage of the process with different user groups and interested organisations. In the iEARTHS team’s case, bias can arise from a lack of awareness of the complexity of modern slavery due to privilege, from holding values that lack universality, or from unconsciously holding beliefs rooted in deep inequalities. Recognition is the starting point for preventing AI learning bias.
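Taken together, steps 1 and 3 might be operationalised as a gate on the training corpus: an item only enters once a minimum number of reviewers, drawn from distinct user groups, have validated it. The thresholds and field names below are illustrative assumptions, not iEARTHS’ actual implementation.

```python
# Simplified sketch: accept a training item only after validation by a
# minimum number of reviewers drawn from distinct user groups.
# Thresholds and field names are illustrative assumptions.

MIN_APPROVALS = 3
MIN_DISTINCT_GROUPS = 2

def is_validated(item: dict) -> bool:
    approvals = item.get("approvals", [])
    groups = {a["group"] for a in approvals}
    return len(approvals) >= MIN_APPROVALS and len(groups) >= MIN_DISTINCT_GROUPS

item = {
    "text": "Recruitment fees paid by workers indicate debt bondage risk.",
    "approvals": [
        {"reviewer": "expert_a", "group": "legal"},
        {"reviewer": "expert_b", "group": "ngo"},
        {"reviewer": "survivor_panel_1", "group": "survivor"},
    ],
}
print(is_validated(item))  # True: three approvals across three distinct groups
```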
Managing hallucinations
The second AI risk any development needs to guard against is hallucination. AI hallucinations are a phenomenon where AI (often LLMs) perceives patterns or objects that are imperceptible or nonexistent, creating outcomes that lack common sense or are plainly false. A recent public example was Meta’s Galactica, an LLM pulled after only three days because it was unable to distinguish truth from falsehood – a key requirement for scientific text generation.
Again – it’s all about data
The iEARTHS team manages hallucination risk first by ensuring that AI training occurs only with high-quality, relevant and curated data. The language used is kept as free from idioms and colloquialisms as possible; accessible, common language is a key element in reducing the room for AI misinterpretation.
Building iEARTHS guardrails means, for instance, ensuring the AI’s role is clearly defined and that it cannot recommend an action that is detrimental to a human being’s wellbeing (physical, mental or economic), or lie (such as saying child labour risk in certain industries or countries is low when in fact there is a known issue).
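A minimal sketch of what a rule-based guardrail of this kind could look like is below. The known-issues register, placeholder country names and string check are invented for illustration; a production guardrail would be far more sophisticated than this.

```python
# Minimal sketch of a rule-based guardrail, as described above.
# The register and placeholder names are invented for illustration.

KNOWN_ISSUES = {("cocoa", "country_x"), ("textiles", "country_y")}

def guardrail(industry: str, country: str, draft_answer: str) -> str:
    """Refuse to understate risk where a known issue is on record."""
    if (industry, country) in KNOWN_ISSUES and "low" in draft_answer.lower():
        return (f"Cannot report low risk: {industry} in {country} "
                "has a documented child/forced labour issue.")
    return draft_answer

print(guardrail("cocoa", "country_x", "Child labour risk is low."))
# -> refusal message instead of the false reassurance
```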
iEARTHS adopts a structured methodology and an iterative development cycle, with human input throughout to validate the assumptions, inputs and outcomes at each stage of data input, learning, testing, iteration and outcome validation. This is key to the AI’s success.
Beyond the human considerations are data privacy concerns and the ethical considerations around data usage and the technology application.
These are the areas we will delve further into in the next two instalments. Both are ideally suited for chartered accountants to lead on.