US-based Cential uses AI in many ways within its business. In particular, it has developed a governance, risk and compliance (GRC) model that integrates with big technology platforms in that space.
“We have built specialised use cases that perform workflows that a user would perform, but bring in AI to do the heavy lifting for them,” says David Ponder, Partner at Cential and CEO of Cential AI.
The Cential AI team has implemented these use cases in phases, developing them iteratively and improving them with the kind of controls that users need, such as privacy and anti-bias.
“We came to realise that, first and foremost, we need to make sure we understand the problem we want to solve,” says Jan Wentzel, Partner at Cential and COO of Cential AI. “I'll use third-party risk management as an example: it’s very important that we understand the challenges in that process, or if you do control documentation or risk documentation, understand the challenge and problems you’re trying to solve.”
Doing that upfront work around the purpose of these AI models gave the team a much clearer picture of how to apply and utilise AI, he explains. As the team started building the models with these problems in mind, the ways in which AI could be used became more and more obvious.
“The first problem that we solved was the blank page problem, which is: I’m a user of a system, I start a new record and I’m staring at a screen with all these fields that I have to fill in,” says Ponder. “Often, it’s an exercise in Google searching and building out the record, back and forth from your search engine to gather more information and put it in.”
Instead of that desk research exercise, Cential developed an AI model that can fill in the fields of a risk management form with minimal information – for example, a company name and a region. The model will fill in information users would need for a risk management profile, such as primary and secondary industries.
“It was low-hanging fruit,” says Ponder. “Why should a human have to go back and forth between Google and find each one of these pieces of information? So that was one of the first use cases that we implemented.”
It’s a simple use case, but because Cential operates in the GRC space, it was important that the model minimised the risk of bias or hallucination in the results it generated. The team needed to develop specific controls to ensure that it maintained certain levels of accuracy and privacy.
“By applying it into a business use case, you need to make sure you’re standardised,” says Wentzel. “You limit the number of hallucinations. We needed to really think through how we present to the end user. How do we take what the end user has, create the prompts and get the feedback back to the end user without requiring them to do all those additional questions?”
Getting consistency and accuracy comes down to how you structure your prompts, he says. Cential standardised prompts by simplifying the fields. Users only need to fill out a single field prompt to generate information, with additional fields to add greater context. The Cential team developed these prompts through extensive testing, says Wentzel.
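The standardised-prompt approach Wentzel describes can be illustrated with a minimal sketch. This is not Cential's actual code; the field names and wording are assumptions, but it shows the pattern: one required field, optional context fields, and a fixed prompt structure so every user generates the same shape of request.

```python
# Illustrative sketch (not Cential's implementation): a standardised prompt
# builder for a third-party risk profile. The user supplies one required
# field; optional fields only add context, never change the structure.
def build_profile_prompt(company_name, region=None, notes=None):
    """Assemble a fixed-structure prompt for a third-party risk profile."""
    lines = [
        "You are completing a third-party risk management profile.",
        "Return only factual values for: primary industry, secondary industries.",
        f"Company: {company_name}",
    ]
    if region:
        lines.append(f"Region: {region}")
    if notes:
        lines.append(f"Additional context: {notes}")
    # Instructing the model to admit uncertainty helps limit hallucinations.
    lines.append("If a value is unknown, answer 'unknown' rather than guessing.")
    return "\n".join(lines)

print(build_profile_prompt("Acme Ltd", region="EMEA"))
```

Because the prompt is assembled by the application rather than typed by the user, results stay consistent across users and runs.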
“This has been a journey that we’ve been going through for some time. At first, we were interacting directly with the large language model, which is almost like interacting with Google or something like that. Then because of data privacy concerns, we started developing a private AI assistant, which interacts with a generative AI model while keeping all interactions and data within the assistant.”
At least one company that Cential works with puts out a System and Organisation Controls (SOC) report, for which it is audited by a third party, Ponder explains. Using an AI assistant allows such companies to maintain a data privacy guarantee while also giving them more control over how the model operates.
“The assistants themselves can be trained; they can be given instructions,” says Ponder. “There are settings on how creative you want to make them. There are settings on how verbose or concise you want them to be.
“We built some data scrubbing into the AI hub, where you could whitelist and blacklist terms, where it would recognise patterns and scrub data. It still has that functionality, but it’s much less important when you have your own private assistant.”
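The whitelist/blacklist scrubbing Ponder mentions can be sketched in a few lines. The term lists and patterns below are hypothetical examples, not Cential's, but they show the mechanism: blacklisted terms and recognised patterns are redacted before text leaves the hub, while whitelisted terms always pass through.

```python
import re

# Hypothetical sketch of data scrubbing: redact blacklisted terms and
# PII-shaped patterns; whitelisted terms are never touched.
BLACKLIST = {"Project Falcon"}          # assumed internal code name
WHITELIST = {"Cential"}                 # terms that must survive scrubbing
PATTERNS = [re.compile(r"\b\d{3}-\d{2}-\d{4}\b")]  # e.g. US SSN-shaped strings

def scrub(text):
    for term in BLACKLIST - WHITELIST:
        text = text.replace(term, "[REDACTED]")
    for pattern in PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

print(scrub("Project Falcon contact, SSN 123-45-6789, via Cential"))
# -> [REDACTED] contact, SSN [REDACTED], via Cential
```

As Ponder notes, this layer matters less once interactions stay inside a private assistant, but it remains a useful belt-and-braces control.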
Getting to this point took a lot of testing and refining of the tools that Cential was developing. Starting with an open generative AI model allowed them to avoid a huge amount of AI training, as those AIs have already been trained on the necessary information. Then the team was able to focus on tools that managed how a user interacts with those models. “To give you an idea, when we started building the assistant, I could say to the assistant ‘use the COSO Framework’ – I don't need to upload the COSO Framework to it,” says Wentzel.
By focusing on training the assistants, Cential has been able to “dial them in” for very specific use cases. “We have one where we’re generating policies, and it needs to be a little bit more creative when it’s generating policies,” Ponder explains. “We give it a template that we want it to follow so it doesn’t get too creative, but it needs just enough creativity to generate those policies. But when you’re filling out the third-party profiles, we don’t want it to be creative at all. We want it to give the data that it knows or that it can find.”
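Dialling assistants in per use case typically amounts to a per-task configuration like the sketch below. The profile names and values are assumptions for illustration (using temperature as the "creativity" setting Ponder describes), not Cential's actual settings.

```python
# Hypothetical per-use-case assistant settings: policy generation gets some
# creativity plus a template to constrain it; profile filling gets none.
ASSISTANT_PROFILES = {
    "policy_generation": {"temperature": 0.7, "template": "policy_template_v1"},
    "third_party_profile": {"temperature": 0.0, "template": None},
}

def settings_for(use_case):
    """Look up the tuned settings for a given workflow."""
    return ASSISTANT_PROFILES[use_case]

print(settings_for("third_party_profile"))
```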
You can also give it instructions: one use case involved document reviews, where one of four conclusions was expected. “We noticed that, in the logic below it, it was using some of what would be considered key or reserved words. Words have different contexts, especially when you’re using them in a professional manner. So we had to train that assistant to never use certain words,” says Ponder.
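A guard like the document-review case Ponder describes can be sketched as a post-generation check: the output must be one of the four expected conclusions, and reserved words are rejected. The conclusion set and banned terms below are invented for illustration; the article does not name them.

```python
# Hypothetical post-generation guard for a document-review assistant.
ALLOWED_CONCLUSIONS = {"effective", "partially effective",
                       "ineffective", "not tested"}
RESERVED_WORDS = {"guarantee", "certify"}   # assumed examples of banned terms

def validate_review(conclusion, rationale):
    """Reject output that breaks the assistant's instructions."""
    if conclusion not in ALLOWED_CONCLUSIONS:
        return False, f"conclusion '{conclusion}' not in the allowed set"
    bad = [w for w in RESERVED_WORDS if w in rationale.lower()]
    if bad:
        return False, f"reserved word(s) used: {', '.join(bad)}"
    return True, "ok"

print(validate_review("effective", "Controls operated as designed."))
print(validate_review("effective", "We guarantee compliance."))
```

In practice such rules would also be written into the assistant's instructions, as Ponder describes; a programmatic check simply enforces them.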
Having developed a number of successful use cases, Cential is now looking at developing more advanced tools, based on everything they’ve learned. Says Wentzel: “We’re working through how to get through the technical limitations and build a foundation that we can build all kinds of use cases from.”
Accounting Intelligence
This content forms part of ICAEW's suite of resources to support members in business and practice to build their understanding of AI, including opportunities and challenges it presents.