Ethics is BDO’s USP, according to BDO Ethics Partner David Isherwood. It drives the firm’s achievements at both the professional and the enterprise level, and among its employees.
Isherwood explained BDO’s Ethics Watermark at an ICAEW conference for Global Ethics Day, hosted at Chartered Accountants’ Hall. It’s a framework ensuring that ethics are embedded from the start in all the firm’s new policies, systems, services and relationships. The Watermark spans the firm’s operations, covering governance, capabilities and controls.
“It’s all about thinking ethically at the right time,” Isherwood said. “And that’s upfront. An architect wouldn’t design a bridge without first following rules of structural integrity. If the architect ignored those rules, they’d only have to mar the bridge’s beauty later on by adding ugly columns and girders. Our world is not dissimilar. Once we’ve embedded ethics into the foundations of our products and services, we can design them with freedom of thought.”
That ethos is proving essential as the firm increasingly adopts artificial intelligence (AI).
Supporting agility
BDO recognises that AI is a nascent environment, Isherwood said. While legal frameworks are emerging in the US, EU and UK, they are far from mature. Although BDO has a grasp of where AI sits in relation to more established laws around data protection and copyright, for example, those sands are likely to shift. In parallel, stakeholder expectations of what AI ought to deliver are evolving rapidly.
For Isherwood, the firm’s choice is clear: “We could either avoid the risks in the field and not use AI at all, or we could show that we have strong governance and processes around AI so that as an enterprise, we understand how and where it’s being used. That way, when those laws, regulations and expectations evolve, we’re agile enough to pivot quickly.”
To build employees’ confidence, BDO has convened a special AI Work Enabling Group to guide the firm’s AI implementation. The unit evaluates AI projects against existing policies, determines necessary staff training and ensures ethical considerations are built into every stage of development.
Responsible path
BDO is currently rolling out a flagship internal AI project called ‘Personas’, a generative AI (GenAI) assistant for employees. For BDO Chief Innovation and Digital Officer Dan Francis, Personas is an example of technology that will transform the profession over the next decade.
“This is only just getting started,” he said. “At present, barriers to entry are incredibly high. It costs billions of pounds to create the core technology models and they have very long lead times. Yet the products and services that stem from those models and sit on top of them are moving really quickly. And all of us have free access to many of those tools. Anyone can download, compile and run open-source large language models.”
Francis also highlighted the emergence of ‘agentic’ AI: autonomous systems capable of replacing humans in specific tasks across industries, including his own. “All of this requires a ‘responsible AI’ approach, and appropriate governance,” he stressed.
BDO’s responsible AI framework ensures systems are secure, private by design and clearly defined in their purposes and limitations. “Enterprise-wide change management, including communications, training and feedback, helps thousands of employees use the technology safely,” Francis said.
Francis urged attendees to seek out credible advice on responsible AI approaches – noting that free resources are available from Microsoft, the UK government and ICAEW.
‘Human in the loop’
BDO’s presentation gave way to a broader panel discussion on AI ethics, moderated by ICAEW Ethics Standards Committee Chair Vicky Andrew. Joining her on the panel were Nick Patterson, Senior Policy Officer in the Innovation Hub of the Information Commissioner’s Office (ICO), and, from King’s College London, Visiting Lecturer and AI ethics law specialist Cari Hyde-Vaamonde and Associate Professor in AI Dr Yali Du, a Turing Institute Fellow.
Asked how professionals should handle personal data while working with AI, Patterson said that the ICO urges users to adopt a ‘data protection by design’ approach to mitigate risks before they become real-world harms. He noted: “A lot of the ethical challenges you will face in the development and deployment of AI systems using personal data are addressed in a legally binding way by statutory principles under the UK GDPR. As those principles are statutory, they’re not just best practice – but rules you’re required to abide by.”
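As a purely illustrative sketch of what ‘data protection by design’ can look like in practice (the field names, salt handling and hashing choice below are assumptions for the example, not ICO guidance or BDO practice), direct identifiers might be pseudonymised before a record ever reaches an AI pipeline:

```python
# Hedged illustration of 'data protection by design': pseudonymise direct
# identifiers before records are passed to an AI system. Field names and the
# hashing approach are assumptions for the example only.
import hashlib

def pseudonymise(record, fields=("name", "email")):
    """Replace direct identifiers with truncated salted hashes."""
    salt = "example-salt"  # in practice, manage salts/keys securely
    out = dict(record)
    for field in fields:
        if field in out:
            out[field] = hashlib.sha256((salt + str(out[field])).encode()).hexdigest()[:12]
    return out

client = {"name": "A N Other", "email": "a.other@example.com", "invoice_total": 1200}
print(pseudonymise(client))  # identifiers replaced, non-personal fields untouched
```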
Hyde-Vaamonde fielded a question on how responsibility for inaccurate or harmful outputs from AI tools should be allocated among developers, users and organisations. In her view, it is not necessarily possible to know for sure whether an AI system is ‘built properly,’ partly because of its sophistication. However, she said: “You can take a look at the dataset that has gone into the tool, which may show you there’s a problem. Certain subjects may be overrepresented, which could lead to imbalances in the outputs.”
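The kind of dataset check Hyde-Vaamonde describes can be very simple in principle. The sketch below is a minimal, hypothetical example (the categories and threshold are invented for illustration) of flagging categories that dominate a training set:

```python
# Minimal sketch: flag over-represented categories in a training dataset.
# The 'sector' field and 50% threshold are hypothetical choices for the example.
from collections import Counter

def flag_overrepresentation(records, field, threshold=0.5):
    """Return categories whose share of the dataset exceeds the threshold."""
    counts = Counter(record[field] for record in records)
    total = sum(counts.values())
    return {category: n / total for category, n in counts.items() if n / total > threshold}

# Toy dataset of invoices labelled by sector
invoices = [
    {"sector": "retail"}, {"sector": "retail"}, {"sector": "retail"},
    {"sector": "manufacturing"},
]
print(flag_overrepresentation(invoices, "sector"))  # {'retail': 0.75}
```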
Finally, Dr Yali Du was asked about what the Responsible AI concept of the ‘human in the loop’ means for accountants. She noted that users can steer and refine the functionality of their chosen AI tools by offering them positive or negative rewards, bringing them more in line with human objectives. In a finance context, she said, that means accountants can train AI tools to align with their risk appetites.
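To make the idea of steering a tool with positive or negative rewards concrete, here is a deliberately simplified sketch. It assumes a toy score-based model in which thumbs-up or thumbs-down feedback nudges which style of output is preferred; real GenAI alignment techniques such as reinforcement learning from human feedback are far more involved.

```python
# Toy 'human in the loop' feedback loop: user rewards nudge a preference score.
# The response styles and learning rate are hypothetical, for illustration only.
preferences = {"cautious": 0.0, "aggressive": 0.0}

def record_feedback(style, reward, learning_rate=0.1):
    """Apply a positive (+1) or negative (-1) reward to a response style."""
    preferences[style] += learning_rate * reward

def pick_style():
    """Choose the response style with the highest learned preference."""
    return max(preferences, key=preferences.get)

# An accountant with a low risk appetite rewards cautious answers
record_feedback("cautious", +1)
record_feedback("aggressive", -1)
print(pick_style())  # "cautious"
```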
Accounting Intelligence
This content forms part of ICAEW's suite of resources to support members in business and practice to build their understanding of AI, including the opportunities and challenges it presents.