
The role of ethics within AI to combat modern slavery

Author: ICAEW Insights

Published: 05 Jul 2024

We complete our iEARTHS miniseries journey through an ethical lens. The ethical risks and challenges are real and require thoughtful reflection to protect what really matters: vulnerable individuals.

You know from the previous articles in this miniseries that data is a core element both for training AI and for its subsequent use. The ethical issues around data are numerous, starting with the collection and use of material in the public domain.

A current topic of discussion is the concept of ‘fair usage’, which links to copyright law. Several companies, including Microsoft, Google and Stability.ai, are embroiled in class-action lawsuits over the alleged unauthorised use of licensed material (text, images, video and voices).

The issue arises from scraping information in the public domain to train AI tools without licensing rights or credit to the original source material.

Extending further into the data itself, consider this scenario: a non-governmental organisation wants to track children in a particular country or region to ensure they are in school and not pressured into child labour. On the surface, it sounds technologically easy; however, responsible AI does not end there. 

The US Children’s Online Privacy Protection Rule and, within Europe, the Charter of Fundamental Rights and the GDPR, for instance, contain specific additional provisions around the collection and use of data relating to children. Within Europe there is also a prohibition on using AI for social scoring, leaving AI companies exposed without adequate legal due diligence and a strong ethical underpinning.

Within iEARTHS (Innovative Ethical AI for Remediation and Termination of Human Slavery), this has been addressed by ensuring all data consumed within the modern slavery solution is licensed for AI training purposes. Additionally, a legal and ethical advisory group is being created to oversee AI development and training. It is key to ensure, both before and after training, that the ethical principle of ‘just because we can, doesn’t mean we should’ is rigorously applied, in both the letter and the intent of the law.

Beyond data collection and licensing during AI training, we must consider data anonymisation. This simply means ensuring data privacy by erasing or encrypting data elements that connect a person (or company) to the data stored. Several techniques can be used, including data masking, generalisation, data swapping, pseudonymisation and synthetic data, among other approaches. 

The technique used by iEARTHS is data masking, which hides data behind altered values. When applied properly, the masked data cannot be reverse-engineered or traced back to a person or company.
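To make this concrete, here is a minimal sketch of irreversible masking, assuming salted hashing with a discarded salt as the mechanism; the field names, records and approach are illustrative assumptions rather than the actual iEARTHS implementation.

```python
import hashlib
import secrets

def mask_record(record: dict, fields_to_mask: set[str], salt: bytes) -> dict:
    """Replace identifying fields with salted, truncated hashes so the
    masked values cannot be linked back to the original person or company."""
    masked = {}
    for key, value in record.items():
        if key in fields_to_mask:
            digest = hashlib.sha256(salt + str(value).encode("utf-8")).hexdigest()
            masked[key] = f"MASKED-{digest[:12]}"
        else:
            masked[key] = value
    return masked

if __name__ == "__main__":
    # The salt is generated per batch and discarded afterwards, so the hashes
    # cannot be recomputed or reversed later.
    salt = secrets.token_bytes(16)
    record = {
        "name": "Jane Doe",
        "employer": "Example Textiles Ltd",
        "country": "Indonesia",
        "testimony": "Recruited after paying a fee of three months' wages...",
    }
    print(mask_record(record, {"name", "employer"}, salt))
```

The key design point is that the mapping from original to masked value is thrown away with the salt, preserving the investigative value of the remaining fields while severing the link to any individual or company.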

Why is this important? Imagine the unintentional harm iEARTHS would cause if a bad actor hacked into the AI platform, extracted a list of survivors and their stories, and used that information to target them again. Given the sensitivity of some of the data used to train the AI, such as supply chain forced labour investigations, protecting the privacy rights of individuals and companies without losing the investigative insight is essential.

Let’s extend data privacy into an operational context: suppose a user performs a responsible business expansion assessment of the Indonesian textile market, for instance.

We made the decision at project inception never to store user-submitted data within the iEARTHS environment. This avoids the data leaks and breaches that have damaged countless organisations. Nor does this decision undermine iEARTHS’ ability to provide explainable AI output with cited sources, a key element of trust.
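As a sketch of what that design choice can look like in code, the hypothetical handler below processes a query entirely in memory and returns an answer together with its source citations, without writing the user's submission anywhere; the names and the toy retrieval step are assumptions, since the iEARTHS architecture itself is not public.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SourceDocument:
    citation: str
    text: str

@dataclass(frozen=True)
class AssessmentResult:
    answer: str
    sources: tuple[str, ...]  # cited provenance keeps the output explainable

def run_assessment(user_query: str, corpus: list[SourceDocument]) -> AssessmentResult:
    """Handle one assessment statelessly: no logging, no database write, no cache."""
    # Toy retrieval step: keep documents sharing a keyword with the query.
    keywords = set(user_query.lower().split())
    relevant = [doc for doc in corpus if keywords & set(doc.text.lower().split())]
    answer = f"Assessment drawn from {len(relevant)} licensed source document(s)."
    return AssessmentResult(answer=answer, sources=tuple(d.citation for d in relevant))

if __name__ == "__main__":
    corpus = [
        SourceDocument("Source A: public labour-practice report", "textile recruitment fees Indonesia"),
        SourceDocument("Source B: trade data summary", "import export statistics Europe"),
    ]
    result = run_assessment("expansion into the Indonesian textile market", corpus)
    print(result.answer, result.sources)  # the query itself is never persisted
```

The user's query exists only for the duration of the call, yet the returned citations still let the user trace where the assessment came from.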

Returning to the idea of unintended consequences as it relates to AI algorithms: data biases (covered in article two of this miniseries) that are not eliminated or mitigated before the AI learns can inadvertently result in poor or adverse outcomes once the system is in use.

For instance, if the full spectrum of local recruiting practices is not represented in the AI’s training data, iEARTHS could miss malpractices that commonly exist in a South East Asian country. In this specific case, economically vulnerable individuals are required to pay their own recruitment fees (rather than the employer paying, as we commonly see in Europe or North America).

These debt arrangements often carry high rates of interest or arise through exploitative lending practices. The result is individuals trapped in debt bondage, also known as debt slavery, subjected to unbearable working conditions for prolonged periods of time, and at times indefinitely. Missing this practice would mean losing the opportunity to raise a remediation requirement prompting the user to take concrete steps to prevent such recruitment-fee practices within its supply chain.
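One way to guard against this kind of blind spot is a coverage audit before training. The sketch below is a hypothetical example (the labels, regions and records are assumptions, not iEARTHS data) that flags region-and-practice combinations with no labelled training examples, such as worker-paid recruitment fees in South East Asia.

```python
from collections import Counter
from itertools import product

# Combinations the solution must be able to recognise (illustrative only).
REQUIRED_REGIONS = {"South East Asia", "Europe", "North America"}
REQUIRED_PRACTICES = {"worker-paid recruitment fees", "employer-paid recruitment fees"}

def coverage_gaps(records: list[dict]) -> list[tuple[str, str]]:
    """Return the (region, practice) pairs with no training examples."""
    counts = Counter((r["region"], r["practice"]) for r in records)
    return [pair for pair in product(sorted(REQUIRED_REGIONS), sorted(REQUIRED_PRACTICES))
            if counts[pair] == 0]

if __name__ == "__main__":
    training_records = [
        {"region": "Europe", "practice": "employer-paid recruitment fees"},
        {"region": "North America", "practice": "employer-paid recruitment fees"},
        # Note: nothing yet on worker-paid fees in South East Asia.
    ]
    for region, practice in coverage_gaps(training_records):
        print(f"GAP: no examples of '{practice}' in {region} - add data before training")
```

A check like this does not remove bias by itself, but it makes the gap visible early enough to source additional data or adjust the training approach.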

Understanding these risks highlights why it is so important to build iEARTHS using an ethics-based approach. UNESCO made this point when it stated: “These (rapid changes) arise from the potential AI systems have to embed biases, contribute to climate degradation, threaten human rights and more. Such risks associated with AI have already begun to compound on top of existing inequalities, resulting in further harm to already marginalised groups.” 

Unsurprisingly, policymakers around the world largely lag behind in establishing AI development laws (whether principle-based or rules-based) that protect against these ethical issues. This leaves technology companies to apply an ethical and responsible mindset, placing the Hippocratic idea of “first, do no harm” at the centre of what they do.

Until fit-for-purpose legislation catches up with technology advancements, AI companies need to establish clear and transparent values, rules, standards and principles for their AI development. Why? Because 8.1 billion people will ultimately be affected by AI algorithms, and an estimated 50 million people are currently trapped in modern slavery, needing our help right now.

Finance leaders are ideally placed to co-facilitate this ethical AI discussion with their legal partners by asking tough questions and holding organisational leaders to account – we have the training and the skills to do so. 

I hope the iEARTHS story will make you ask yourself: what will you do with your capabilities and competences to make a positive and sustainable difference? As chartered accountants, we are very well placed to be a driving force in positive, ethical uses of AI.
