Concerns that AI could lead to some people becoming uninsurable or, at the other extreme, result in better risk profiles and lower customer premiums highlight some of the ethical issues the insurance industry will need to grapple with as use of AI ramps up.
A round table hosted by ICAEW’s Financial Services Faculty brought together industry leaders, academics and regulators in September to discuss the risks and opportunities of AI use cases in financial services. Given the ethical issues at play, it is no coincidence that many of the topics on the agenda were reiterated by Financial Conduct Authority (FCA) Chief Executive Nikhil Rathi in a speech several days later.
Better services, access and prices
Round-table participants discussed how AI-enabled hyper-personalisation of insurance quotes could lead to better services, access and prices. For example, AI could comb the social media and bank transactions of a potential customer – with their consent – to build a more accurate risk profile and possibly offer a better price as a result.
The flip side is the risk that insurance becomes so hyper-personalised that certain demographics are discriminated against and denied cover, potentially leaving swathes of the population uninsurable. It was suggested that less healthy people and/or those without access to technology might be most at risk.
These traits are often associated with lower socioeconomic backgrounds, so while AI-enabled hyper-personalisation may benefit those of a higher socioeconomic background and provide better quotes, it might unduly penalise those at the other end, particularly vulnerable customers (a key focus for the FCA), exacerbating existing inequality.
Risk-pooling model
This contrasts with the risk-pooling model that currently exists in the insurance industry, which some argue should allow coverage of a wider range of customers. You can see risk-pooling in action if your company provides a private healthcare trust scheme: opting into the company scheme is generally cheaper than buying private medical insurance individually.
This is partly due to your company’s negotiating power, but also because the risk is spread across the whole company, which comprises many individuals with a wide range of risk profiles. For example, a Big Four accounting firm generally has large numbers of younger employees who tend to experience fewer health issues, which offsets the cost of insuring older partners.
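To make that cross-subsidy concrete, here is a minimal sketch with entirely hypothetical headcounts and expected claims costs (none of these figures come from the discussion): charging every member the pool’s average expected cost works out cheaper than an individually priced premium for higher-risk members, and dearer for lower-risk ones.

```python
# Illustrative only: hypothetical headcounts and expected annual claims costs.
# A "fair" individual premium equals each member's expected claims cost;
# a pooled premium charges everyone the group's average expected cost.

groups = {
    "junior staff": (800, 400.0),   # (headcount, expected annual claims in GBP)
    "senior staff": (150, 1200.0),
    "partners":     (50, 3500.0),
}

total_cost = sum(n * cost for n, cost in groups.values())
members = sum(n for n, _ in groups.values())
pooled_premium = total_cost / members  # GBP 675 with these numbers

print(f"Pooled premium per member: GBP {pooled_premium:.0f}")
for name, (n, cost) in groups.items():
    delta = cost - pooled_premium
    verdict = "cross-subsidised by" if delta > 0 else "cross-subsidising"
    print(f"{name}: individual cost GBP {cost:.0f}, {verdict} GBP {abs(delta):.0f} per head")
```

With these assumed numbers, junior staff pay GBP 275 a head above their individual risk cost, which funds a GBP 2,825 per-head subsidy for partners – the mechanism that hyper-personalised pricing would unwind.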
Round-table participants acknowledged that discrimination in the insurance industry has always existed – the cost of insuring an 18-year-old male driver versus an 18-year-old female is a case in point. The key difference in this evolution is the granularity of discrimination enabled by AI and the unintended second- or third-order effects.
The case of younger drivers
Unintended consequences can be further demonstrated in the case of younger drivers: they are generally considered high risk and face higher premiums as a result. But they are also likely to be the least able to afford those premiums, so more may drive uninsured. That in turn increases the premiums of those who remain insured, because a smaller pool of policyholders is left to cover the cost of claims.
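As a rough illustration of that feedback loop – using made-up numbers, not data from the round table – the sketch below holds total claims costs constant and assumes a simple lapse rate that rises with the premium; each round of lapses shrinks the pool and pushes the premium up for everyone who stays.

```python
# Illustrative adverse-selection spiral; all figures are assumptions.
# Total claims are held constant, so the premium is simply claims / pool size.

total_claims = 1_000_000.0  # annual claims the pool must cover (GBP)
insured = 2_000             # drivers currently insured

premium = total_claims / insured  # starts at GBP 500
for year in range(1, 6):
    # Assumption: 1% of the pool lapses for every GBP 50 of premium above GBP 400
    lapse_rate = max(premium - 400.0, 0.0) / 50.0 * 0.01
    insured = int(insured * (1 - lapse_rate))
    premium = total_claims / insured
    print(f"Year {year}: pool {insured} drivers, premium GBP {premium:.0f}")
```

Under these assumptions the pool shrinks from 2,000 to roughly 1,760 drivers over five years while the premium climbs from GBP 500 towards GBP 570 – each effect feeding the other.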
Another point of contention identified in the round-table discussion is that the industry’s current mortality tables predict the lifespan of sections of the population fairly accurately, but with AI-enabled hyper-personalisation it is foreseeable that life expectancy could be forecast with alarming accuracy at the individual level.
Moral dilemmas
Under existing UK Consumer Duty rules, regulated firms such as insurance companies have a duty to communicate outcomes to their customers. The question then arises: when explaining the reasons for denying cover, are insurance firms obliged to reveal the conclusions drawn from the data points considered, such as an individual’s life expectancy?
For the sake of argument, if a model could accurately predict the date and time of someone’s death, should an insurer reveal this information from a moral standpoint – and would the customer even want to know?
Although the round table was focused on the ethical aspects of the use of AI in insurance in a specific scenario, it was acknowledged that there were also many benefits associated with the implementation of AI in the insurance industry.
In particular, automating parts of claims processing reduces administrative costs for the company and leads to faster decisions for the customer. Using AI to detect and combat insurance fraud could help reduce a big cost driver for the industry, although this can be an endless cat-and-mouse game as fraudsters and insurance companies alike seek to use AI to gain an edge.
Potential solutions
To combat some of the issues identified, round-table participants agreed that firms (insurance or otherwise) should focus on outcomes and work backwards to ensure the control environment delivers those outcomes and manages risks to an acceptable level. Doing so would help mitigate some of the unintended second-order effects of implementing this new technology, on both the industry and society as a whole.
Existing model risk management principles remain highly relevant, and it is important that firms have a clear understanding of their models and data. They should embed responsible AI guardrails into their existing model risk management and regularly monitor the performance of models for biases, as the sketch below illustrates. In the current environment, the importance of keeping a human in the decision-making loop was also stressed.
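As one hedged example of what monitoring for bias could look like in practice – the metric, group labels, threshold and data here are all illustrative assumptions, not anything prescribed by the round table or the FCA – a firm might track approval rates per demographic group each monitoring period and escalate to human review when the gap breaches a tolerance.

```python
# Minimal bias-monitoring sketch; groups, threshold and data are illustrative.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group_label, approved_bool) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

# Hypothetical batch of underwriting decisions from one monitoring period
batch = ([("group_a", True)] * 80 + [("group_a", False)] * 20
         + [("group_b", True)] * 55 + [("group_b", False)] * 45)

rates = approval_rates(batch)
gap = max(rates.values()) - min(rates.values())  # demographic-parity gap
print(rates, f"gap={gap:.2f}")

if gap > 0.20:  # tolerance is a policy choice, assumed here for illustration
    print("ALERT: route affected decisions to human review")
```

A simple rate comparison like this will not catch every form of bias, but it shows how a guardrail can be wired into routine model monitoring rather than left as a one-off review.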
Managing third party risks
With the CrowdStrike IT outage still fresh in people’s minds, firms need to be cognisant of, and actively manage, the risks posed by critical third parties, particularly if their AI strategy relies on foundation models supplied by a third party.
Malcolm Bacchus, President of ICAEW, who opened proceedings at the round table, says: “AI has the capacity for improving service delivery across all sectors, but it also has a significant capacity for destabilising markets and acting against the public interest.” He recommends that “a robust code of conduct backed up by good, but not excessive, regulation will be essential as AI becomes embedded into businesses”.
AI-enabled hyper-personalisation can be a force for good when it opens up access, improves outcomes or lowers costs. However, where goods are essential, such as insurance, and are currently offered to a broad cross-section of society, hyper-personalisation may have downsides. These can be mitigated by robust model risk management and by ensuring an outcomes-orientated focus when implementing new technologies.