
Types of AI: when humans need to step in

Author: ICAEW Insights

Published: 19 Nov 2024

Accounting intelligence: to harness the opportunities offered by artificial intelligence, does there always need to be a human in the loop?

Currently, the types of artificial intelligence (AI) that individuals are most familiar with include traditional AI, such as machine learning (ML), and generative AI (GenAI), which has been popularised by tools such as ChatGPT. But AI tools sit at different points along a spectrum: some are better suited to repetitive tasks, while others can be applied more creatively.

“When evaluating different types of AI, it’s helpful to imagine a spectrum ranging from traditional to GenAI,” says Paul Teather, CEO of Cardiff-based AMPLYFI, an AI-powered market intelligence company. “Traditional AI excels at recognising patterns and labelling content, whereas GenAI can create new outputs based on its training data. These distinctions shape their respective applications and limitations.”

Traditional AI is well-suited to tasks such as organising data in spreadsheets or automatically tagging photos. Its strengths, compared with GenAI, lie in consistency, auditability and cost-effectiveness, making it ideal for repetitive tasks that require close human oversight. However, it tends to be limited in scope and flexibility, Teather says.

On the other hand, GenAI, despite its versatility, can struggle with consistency and accuracy, sometimes generating convincing but false information, known as hallucinations. Critically, the huge costs of running GenAI tools also pose challenges for large-scale use.

Amy Rushby is Co-Founder and Director of Product at fintech company Carmoola, which uses AI primarily to improve operational efficiency and deliver a better customer experience. She says: “While AI has significantly transformed many aspects of our business, it’s crucial to recognise that its effectiveness depends on using the right type of AI for the right task.”

Natural language processing (NLP) powers Carmoola’s customer support chatbots and automates parts of its customer communications. It enables the company to handle routine enquiries promptly. However, Rushby says that “because language is nuanced, humans need to intervene when responses fall short of delivering the empathy and context needed in more sensitive conversations”.

Even if we don’t realise it, NLP is already part of everyday life: it powers search engines, customer service chatbots that respond to spoken commands, voice-operated GPS systems and question-answering digital assistants on smartphones such as Amazon’s Alexa and Apple’s Siri.

NLP is a subfield of computer science and AI that uses ML to enable computers to understand and communicate in human language. It combines computational linguistics (the rule-based modelling of human language) with statistical modelling, ML and deep learning (DL).

ML systems improve through experience rather than direct programming, and are used to recognise speech and images, as well as to forecast market trends. DL is a subset of ML that uses many layers of neural networks; it is often the technology behind voice control in consumer devices.
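To make the idea of learning from examples, rather than explicit rules, more concrete, the sketch below shows a tiny text classifier of the kind that underpins such systems. It is purely illustrative and uses the open-source scikit-learn Python library; the training phrases, labels and the “finance versus general” task are assumptions made for the example, not anything used by the companies quoted in this article.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Labelled examples: the model is given no rules, only data.
training_texts = [
    "Invoice 1042: payment due 30 days from receipt",
    "Payment reminder: your balance of 250 GBP is overdue",
    "Lunch next Tuesday to discuss the new project?",
    "Minutes of the quarterly board meeting attached",
]
training_labels = ["finance", "finance", "general", "general"]

# Convert the text to numeric features, then fit a classifier on the examples.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(training_texts, training_labels)

# The model now tags unseen text using patterns it inferred itself.
print(model.predict(["Please settle invoice 1108 by Friday"]))

The point of the sketch is the absence of hand-written rules: retraining on different examples changes the behaviour without changing the code, which is also why the quality of the training data matters so much.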

However efficient these tools may appear, AI is only as good as the data it is trained on. AI programs can learn only from the information we give them, so if inaccurate data is entered, the tool will produce incorrect outputs.

And while AI can do certain things more efficiently or methodically than humans, it cannot yet show empathy or human intuition. Critically, AI can also be biased; biases stem from the training data or from the algorithms themselves.

“When using any type of AI, a key consideration is recognising where human oversight is necessary,” Rushby says. “For instance, AI models can sometimes produce unexpected or biased results if they are trained on incomplete or skewed data. That’s why we ensure there is always a person in the loop to validate outputs and make final decisions when it comes to customer interactions and critical risk assessments.”
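A common way to implement the “person in the loop” that Rushby describes is a confidence gate: the system acts on its own only when the model is sufficiently sure, and routes everything else to a human. The sketch below is our illustration of that pattern, not Carmoola’s implementation; the threshold value and the scikit-learn-style classifier interface (predict_proba) are assumptions.

# Illustrative human-in-the-loop gate (a sketch, not a real system).
# `model` is assumed to be any trained classifier exposing predict_proba,
# such as the scikit-learn pipeline in the earlier example.
REVIEW_THRESHOLD = 0.85  # below this confidence, a person decides

def route_decision(text, model):
    """Return an automated decision, or flag the case for human review."""
    probabilities = model.predict_proba([text])[0]
    confidence = float(probabilities.max())
    if confidence >= REVIEW_THRESHOLD:
        return {"decision": model.classes_[probabilities.argmax()],
                "reviewed_by": "model",
                "confidence": confidence}
    # Low confidence: no automated decision is released.
    return {"decision": None,
            "reviewed_by": "pending_human_review",
            "confidence": confidence}

The threshold itself is a business decision rather than a technical one: lowering it automates more cases, while raising it sends more of them to people.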

Establishing clear boundaries through an AI use policy is also critical for any business contemplating adopting AI in its organisation.

Iain Simmons, Senior Commercial Lawyer at Arbor Law, says: “Not doing so opens them up to ‘shadow AI’ where employees use AI tools secretly, which can introduce significant risks, from data breaches to legal issues and regulatory non-compliance. It’s vital to develop a robust AI governance framework that ensures AI tools are legally compliant, ethical and transparent.”

AI can automate routine, data-intensive tasks with remarkable speed, but it should not be used as a substitute for humans, particularly in areas of ethics, customer trust, and complex decision-making.

Tech firm Jitterbit’s CEO Bill Conner says: “It’s important to remember that AI isn’t a one-size-fits-all solution. While it’s a powerful tool, its success hinges on the humans who operate and manage it. 

“Even with AI automating tasks, human expertise remains crucial for decision-making and integration. This human-machine partnership fosters maximum efficiency while mitigating risks associated with over-reliance on AI,” says Conner, a former adviser to Interpol and GCHQ, who worked on the project to move the UK to e-passports.

Organisations can increase their accuracy, output and productivity by combining different AI types, but using them without human oversight remains a long way off. It’s vital to treat these tools as assistants rather than overlords.

Accounting Intelligence

This content forms part of ICAEW's suite of resources to support members in business and practice to build their understanding of AI, including opportunities and challenges it presents.
