What are the main ethical challenges of implementing AI?

Author: ICAEW Insights

Published: 08 Aug 2024

In the run-up to a panel talk on AI ethics at ICAEW’s Annual Conference, PwC’s Maria Axente provides an overview of the issues at stake.

The rapid growth of generative AI (Gen-AI) platforms, which are enabling millions of everyday users to apply AI in creative ways, has taken the world by storm. In the coming years, we are all likely to use these tools more and more.

But with that widespread use of sophisticated technology comes a number of ethical hurdles – some of which will be examined on 4 October at ICAEW’s 2024 Annual Conference. In the panel discussion ‘Navigating ethical challenges in AI implementation’, ICAEW’s President Malcolm Bacchus and Head of Tech Policy Esther Mallowah will be joined by prominent ethics expert Professor Chris Cowton and PwC Responsible AI Lead Maria Axente.

Ahead of the event, Axente spoke to Insights to sketch out what she feels are some of the biggest ethical concerns that organisations are currently facing with AI implementation.

Setting boundaries

In Axente’s assessment, the three most pressing challenges for leaders to tackle are:

1. Misuse or inappropriate use of Gen-AI

Axente accepts that how employees choose to use AI is not something that organisations will want to overly prescribe. After all, leaders will be eager for staff to be as creative with the technology as possible. However, she stresses, it is still necessary to put in place some common-sense guardrails.

“This amazing technology is now in the hands of everyone who wants to use it,” she says. “But at the moment, organisations don’t have very well-developed training programmes. In addition, they need to set up relevant safeguards and policies.”

She continues: “Before you open up Gen-AI to a workforce, policies are important for setting the boundaries of the use cases that the tech will need to answer in your employees’ everyday work and for how your organisation will use any resulting data.”

In April, Axente points out, privacy advocacy group the European Center for Digital Rights – which operates under the name ‘NOYB’, short for ‘none of your business’ – filed a GDPR complaint against ChatGPT maker OpenAI, alleging misrepresentation. The case was brought on behalf of a public figure who had repeatedly asked ChatGPT for his date of birth. Instead of saying that it didn’t know – which it didn’t – the platform provided a series of wrong answers, each presented as fact. NOYB’s complaint alleged that OpenAI had refused the public figure’s request to correct his details.

In another case, Axente says, several employees at Samsung’s Korean operation got into trouble last year after pasting some of the company’s proprietary code into ChatGPT to hunt for a bug fix.

For Axente, the NOYB case highlights inherent limitations of Gen-AI systems, which can produce incorrect or misleading outputs. The Samsung gaffe, meanwhile, is a classic example of platform misuse, with staff inappropriately feeding the company’s intellectual property into a publicly accessible platform.

Which leads neatly to:

2. Treatment of copyrighted material

“We know that Gen-AI platforms and large language models have been trained on a huge corpus of data,” Axente says. “But it seems that we’re coming to a tipping point where those platforms will have exhausted all the web content that’s in the public domain. So, it’s likely that the next generation of platforms will need to ingest copyrighted data.”

That encroachment on copyrighted material, she notes, has triggered a swift response from IP owners who suspect that the process is already underway. Some owners have lawsuits in progress and others have settled.

In Axente’s view, though, copyright holders are wading into uncharted legal waters. “We are nowhere near fully understanding the dynamics of how AI will impact the copyright world,” she says. “So far, all existing laws are designed to protect the output of humans. But what happens when machines play a bigger role? How are we going to discern the point where the human input ends and the machine’s begins?”

Axente says that companies in highly regulated industries are particularly concerned about the risk of unwittingly exploiting copyrighted material through their use of AI tools. However, she sees potential for new business models and partnerships to arise between rights owners and AI platforms, following the licensing deals that OpenAI has struck over the past year or so with the Associated Press and News Corp.

3. Hallucinations

This ties back to the origins of the NOYB case described above. “AI platforms have a natural tendency to produce outputs that are statistically possible, or look plausible – but are factually inaccurate,” Axente says.

“This is a challenge because we’re turning to AI to take on certain tasks and operate with minimal human supervision. But at this point in time, we can’t fully trust the outputs. In some cases, accuracy is as low as 45%. That’s nowhere near good enough. So, there’s a pressing need to address what’s clearly an engineering limitation.”

Ripple effects

Fittingly enough, Axente has an engineering analogy to illustrate the risks that organisations could face if they ignore those challenges.

“It’s like taking the engine of a new Ferrari and putting it in the chassis of an old Honda,” she says. “At some point, that’s going to break the car. Similarly, if you use technology that’s not appropriately considered for where you are as a business and what the ripple effects of introducing it could be, that tech is going to bring you down.”

There are no excuses for leaders to be ignorant of the risks, she points out. “In public attitude surveys conducted around the world, AI sentiment has been quite negative,” she says. “That shows how often the risks are being debated. Even AI pioneers who’ve become celebrities often talk about risks associated with Gen-AI.”

Axente is looking forward to “a bit of debate and disagreement” on the conference panel. “From what I’ve seen in previous discussions,” she says, “we will have an opportunity to progress our understanding of this topic. Importantly, all the panellists will bring unique perspectives. If you build an echo chamber from the views of only business leaders, or only data scientists, or only professional services firms, you will see only part of the AI phenomenon.”
