
Risks of cognitive technologies

We explore key risk areas that have arisen from our research into cognitive technology – inexplicability, data protection, bias and lack of contextual understanding – as well as wider automation risks. These areas include both larger-scale strategic risks around adopting cognitive technologies and tactical considerations that may affect specific projects.

The risks do not apply uniformly. Understanding the context around each use of cognitive automation is vital to understanding its risks and the appropriate responses.

For example, a poorly built advert-targeting algorithm for an online platform will hurt the platform’s profitability and could damage its reputation, but the harm is unlikely to be significant in scale and the repercussions fall mainly on the business using the technology. A broken mortgage approval system, however, could have a substantial impact on innocent applicants, with greater legal and regulatory consequences for the organisation as a result. Severe transgressions could lead to market-level interventions by regulators.

While third-party cognitive technology services are increasingly common, using an externally provided solution does not eliminate the organisation’s risk or responsibility, as the General Data Protection Regulation (GDPR) makes clear.

Inexplicability

Inexplicability is a particularly thorny problem for AI projects driven by machine learning. Unlike automation technologies of the past, machine learning is not necessarily about replicating human work more efficiently and quickly but can instead be about building tools to achieve the same aims in new – and potentially incomprehensible – ways. This is particularly important in the context of Article 22 of the GDPR, which restricts organisations’ ability to make decisions about individuals based solely on automated processing unless they take suitable measures to protect the data subject’s rights, freedoms and legitimate interests.

Machine learning works by repeatedly adjusting a model’s parameters – or, in evolutionary approaches, by generating many candidate variations and discarding the worst performers – to steadily produce a program that maximises progress towards a goal. The resulting systems are not explicit recreations of expert humans’ methods of approaching these tasks; instead they are ‘taught’ through inferred statistical associations, loosely analogous to the way a developing human brain makes connections between neurons as it grows and learns. This means the algorithm produced can be difficult or impossible to understand and, while its outputs might closely match those desired, it may sometimes make decisions or classifications that seem obviously wrong or simply bizarre to a human observer.
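To make this concrete, the short Python sketch below trains a one-parameter model on an invented toy task (learning to double a number). The point to note is that the finished ‘model’ is just a number that happens to work; nothing inside it records a human-readable rule.

    # A minimal sketch of iterative model training on an invented toy task:
    # learn y = 2x from examples. The model never stores the rule "double x";
    # it only accumulates a numeric weight that happens to reproduce it.
    data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

    w = 0.0      # the single learned parameter, starting from nothing
    lr = 0.05    # learning rate: how far to adjust on each step

    for step in range(200):
        # Measure how wrong the current model is (gradient of squared error).
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        # Nudge the parameter in the direction that reduces the error.
        w -= lr * grad

    print(f"learned w = {w:.3f}")  # ~2.0, but 'w' is a bare number, not a rule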

For example, image classification algorithms seem to perform almost miraculously most of the time, only to fail on an apparently simple object seen at an unusual angle or with a slight difference in shape. Machine learning processes can also be overly literal in interpreting their success criteria, as in the case of a simulated robotic arm asked to maximise the efficiency of raising a LEGO brick off the floor. Because success was measured by the distance from the floor to the brick’s base, the system learned to simply turn the brick upside down.
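The same literal-mindedness can be shown in miniature. In the invented Python sketch below, the measured objective (the height of the brick’s base) is satisfied by flipping the brick rather than lifting it:

    # A toy illustration (all values invented) of a literal success metric.
    # Goal as intended: lift the brick. Goal as measured: height of the
    # brick's *base* above the floor. Flipping scores well without lifting.
    BRICK_HEIGHT = 1.0

    def base_height(action):
        if action == "lift":
            return 0.5              # base raised 0.5 units: real work done
        if action == "flip":
            return BRICK_HEIGHT     # brick upside down: its base is now on top
        return 0.0                  # do nothing

    actions = ["noop", "lift", "flip"]
    best = max(actions, key=base_height)
    print(best, base_height(best))  # 'flip' wins the metric, defeats the goal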

In addition to the potential for incorrect decisions, opacity in models creates further risks. Inexplicable AI is AI that cannot be learned from and used to improve processes elsewhere. Similarly, if an algorithm recommends a suboptimal or strange action, it can be difficult to identify the cause and how to remedy it in future. If an algorithmic decision is subsequently challenged – for example, if a loan applicant appeals the decision to decline them – the company may struggle to justify the decision. This also presents a challenge to GDPR compliance: under Article 22 and its related transparency provisions, organisations must be able to give data subjects meaningful information about the logic involved in automated decisions.
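One partial mitigation, where the use case allows it, is to prefer transparent models whose individual decisions can be decomposed and quoted back to the person affected. The sketch below uses a hypothetical linear loan score; the feature names, weights and threshold are invented for illustration only.

    # A hypothetical sketch of a transparent linear score whose individual
    # contributions can be quoted back to a declined applicant.
    weights = {"income": 0.4, "years_employed": 0.3, "missed_payments": -0.8}
    threshold = 1.0

    applicant = {"income": 1.2, "years_employed": 2.0, "missed_payments": 1.0}

    contributions = {f: weights[f] * applicant[f] for f in weights}
    score = sum(contributions.values())
    decision = "approve" if score >= threshold else "decline"

    print(decision, round(score, 2))
    # Each term is an auditable 'reason code' -- something a purely opaque
    # model cannot offer when a decision is challenged.
    for feature, value in sorted(contributions.items(), key=lambda kv: kv[1]):
        print(f"  {feature}: {value:+.2f}")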

Data protection

The raw material needed to create AI is not business knowledge, but data. Machine learning creates models from vast amounts of training data, which means that the collection of data has become a serious value-generating activity for many tech giants. The amplifying effect of volume on the quality of the resulting algorithm, and the difficulty in knowing which fields will ultimately be useful for the algorithm to do its job, has led to a ‘keep everything’ approach to data retention. This has been aided by plummeting storage costs in the age of cloud computing. However, this shift has also led to concerns over consumer privacy, ethical use of data and data breaches.

Regulation and litigation have pushed back against this approach, and stiffer data protection rules, such as the GDPR in Europe, have emerged. For organisations to thrive with cognitive technology, they need to balance the collection of data against public concerns and increased regulatory pressure.

Chartered accountants will be familiar with the requirements and protections around custody of client assets; data subjects’ information should be handled with similar caution.

Bias

Machine learning is born from data, so any issues in that data – whether omissions from the dataset or human biases and prejudices that taint the data – naturally affect the final model.

Algorithms might not only fail to eliminate errors or bias in human-generated data, but can in fact embed and scale up these biases, leading to issues such as unequal treatment and unfairness that both reduce the effectiveness of the model and open the organisation to reputational and legal risks.
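As a simplified, invented illustration of how this happens, the sketch below stands in for a real learner with a model that simply imitates the majority historical decision for each group. Trained on skewed approval records, it automates the skew:

    # An invented sketch of bias amplification: historical approvals were
    # skewed against group 'B', so a model trained to imitate those labels
    # reproduces the skew, regardless of the groups' actual creditworthiness.
    from collections import Counter

    history = [("A", "approve")] * 80 + [("A", "decline")] * 20 \
            + [("B", "approve")] * 40 + [("B", "decline")] * 60

    def train(examples):
        # 'Model' = majority historical decision per group (a stand-in for
        # whatever pattern a real learner would extract from the same data).
        by_group = {}
        for group, label in examples:
            by_group.setdefault(group, Counter())[label] += 1
        return {g: c.most_common(1)[0][0] for g, c in by_group.items()}

    model = train(history)
    print(model)  # {'A': 'approve', 'B': 'decline'} -- past bias, now automated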

No understanding of context

Machine learning models can be developed either through a one-off training process, or through ongoing learning. In both cases, sudden large changes in real-world circumstances can leave a cognitive system ill-adapted for its new environment, leading to excess rigidity and maladaptive behaviour.
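The toy sketch below (with invented figures) shows this effect in its simplest form: a fraud-flagging threshold tuned once, on old transaction amounts, flags every legitimate transaction after real-world prices shift.

    # A toy sketch of drift (numbers invented): a fraud threshold tuned on
    # old transaction sizes silently breaks when the distribution shifts.
    old_normal = [20, 25, 30, 35, 40]     # typical amounts at training time
    THRESHOLD = max(old_normal) * 2       # 'learned' once: flag anything > 80

    new_normal = [150, 160, 170, 180]     # amounts after a real-world change

    flags = [amount > THRESHOLD for amount in new_normal]
    print(flags)  # every legitimate transaction is now flagged as anomalous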

Similarly, robotic process automation follows a complex, but ultimately pre-set series of steps to replicate a business process. Unforeseen changes to the input data, changes in rules or unusual circumstances can lead to the system breaking or carrying out outlandish or unreasonable actions. These kinds of problems with automation can be seen in the 2017 case of a Pennsylvania woman who was accidentally sent an electricity bill for $284 billion.
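One common defence, sketched below with invented values, is a plausibility check placed between the automated calculation and any irreversible action, so that implausible results are routed to a human rather than acted on.

    # A sketch of a plausibility check before an irreversible automated action.
    # The ceiling and bill amounts are invented for illustration.
    MAX_PLAUSIBLE_BILL = 10_000.00   # sanity ceiling for a domestic account

    def issue_bill(amount):
        if not (0 < amount <= MAX_PLAUSIBLE_BILL):
            # Hold for human review instead of sending a $284bn invoice.
            raise ValueError(f"Implausible bill {amount:,.2f}: held for review")
        print(f"Bill issued: {amount:,.2f}")

    issue_bill(86.50)                   # the normal case goes through
    try:
        issue_bill(284_000_000_000.0)   # the parsing error is caught here
    except ValueError as err:
        print(err)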

At worst, poor adaptiveness can be exploited by dishonest external users, who might attempt to manipulate the cognitive system to obtain a desired outcome, shape its future learning or expose the data it was trained on.

Cognitive technology and automation have led to increased discussion around the ethical obligations of chartered accountants and the wider business community. Ethics is an example of the complex contextual reasoning that may be missed by an automated process.

For a fuller discussion of the effects of new technologies on ethics and accountability, see our paper on the subject at www.icaew.com/ethicstechfac.

Automation risks

Automation also presents wider risks. For example, organisations need to consider how the skills they have affect the opportunity to automate – that is, whether they have, and can retain, staff with the right skills to implement their automation projects. Conversely, automation can displace staff whose valuable experience and knowledge will not be fully replicated by the new automated process.

Automation also threatens a traditional key control against fraud: segregation of duties. A department of dozens of individuals might be replaced with a handful of specialists responsible for a new cognitive automation process. This could make it easier for a small number of people to conspire to commit fraud.

All these issues can make it difficult for cognitive technology to reach its full potential as an automation and value-generating tool.

Next, we will consider how these risks can be mitigated and controlled.
