Last month, the Department for Science, Innovation and Technology released the report Assuring a Responsible Future for AI. This important piece of work highlights the risks that arise when businesses and public sector organisations adopt AI tools without sufficient safeguards in place. For our latest blog, our Research & Insights Director, Ed Houghton, looks at the importance of choosing the right words for assurance and takes an in-depth look at some of the trust issues our research uncovered.
Humans are hard-wired to consider trust, whether we’re buying a new car, meeting a stranger, or even deciding when to cross the street. We’re constantly assessing the world around us and judging whether a decision we or someone else makes is likely to benefit us or cause us harm. The problem is that humans aren’t always great at knowing what should and shouldn’t be trusted.
Trust, or more specifically trustworthiness, is a central element in the field of AI acceptance.
Trustworthiness, defined as displaying the characteristics that demonstrate you can be trusted, is gold dust to those looking to make use of AI in their tools and services. Tech designers go out of their way to make sure you trust their tools, because without trust, you’re very unlikely to come back. UX designers might choose voices that convey warmth, or use colloquialisms and local language to help ease interactions and build rapport. There’s a need for trust in text-based interactions too – some tools use emojis to appear more authentic or friendly, while others seek to reassure you by providing references for the answers they generate. These are all methods to help you trust what AI is doing.
The issue, however, is that trust in AI is currently being fostered by the very tools seeking your engagement. That conflict of interest means that people using AI, whether employees or consumers, may be placing their trust in a risky product or tool – and in an emerging market that is evolving at pace, that creates real risk.
Understanding the risk AI presents, and the language used by business to assure products, was the topic of our most recent study for government. The DG Cities team undertook research for the Responsible Technology Adoption Unit and the Department for Science, Innovation and Technology, exploring AI trust from the perspective of those buying and using new products in the field today: what does AI assurance mean to them, and what do they need in order to assure new tools coming to market? Our approach explored how AI tools are currently understood, and key to people’s understanding was the concept of fairness.
Understanding the fairness of AI tools
For AI tools to be used safely, their training needs to be based on real-world data that represents the reality in which the tool will operate, while also protecting it from making decisions that are biased or that unfairly limit outcomes. We saw the tension between “good bias” and “bad bias” when exploring the use of AI in recruitment technology: bias from both objective and subjective measures is used to drive a hiring decision, but those using the tool need to be sure there is no bias related to protected characteristics. This is where fairness comes to the fore:
“Fairness is the key one. And that intersects with unwanted bias. And the reason I try and say ‘unwanted bias’ is that you naturally need some (bias). Any AI tool or any kind of decision making tool needs some kind of bias, otherwise it doesn't produce anything. And so, I think front and centre is how does it work, does it work in the same way for all users?”
- private sector procurer
You can imagine a similar scenario playing out in a local authority setting, in which resident information is used to assess housing allocation, or to drive retrofit and improvement works to social housing stock. Here, bias must be understood to ensure the tool is delivering value to all groups – but with the introduction of certain criteria, an equitable approach may be created, whereby certain characteristics (e.g. low income, disabilities) are weighted differently. Fairness here is critical, and is a major reason why assurance processes, including bias assessments and impact evaluations, are key practices for local authorities to build their capabilities in.
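To make that concrete, the sketch below shows one simple check a bias assessment might include: comparing the rate of positive decisions across groups in a tool’s historical outputs. The decision log, group labels and disparity threshold are purely illustrative assumptions, not data from our research or from any specific tool, and a flag from a check like this is a prompt for investigation rather than proof of unfairness.

```python
# A minimal, illustrative bias assessment: comparing outcome rates across groups.
# All data, group labels and the disparity threshold here are hypothetical.

from collections import defaultdict

# Hypothetical decision log from an AI-assisted allocation tool:
# each record is (group_label, decision), where decision is True if approved.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def approval_rates(records):
    """Return the share of positive decisions for each group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, decision in records:
        totals[group] += 1
        if decision:
            approved[group] += 1
    return {group: approved[group] / totals[group] for group in totals}

def disparity_flagged(rates, threshold=0.2):
    """Flag if the gap between the highest and lowest approval rates exceeds the threshold."""
    return max(rates.values()) - min(rates.values()) > threshold

rates = approval_rates(decisions)
print(rates)                     # {'group_a': 0.75, 'group_b': 0.25}
print(disparity_flagged(rates))  # True: a gap this size warrants further investigation
```

As the procurer quoted above notes, some bias is necessary and intended; checks like this exist to surface unwanted bias, particularly where it relates to protected characteristics.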
Making AI assurance more accessible
UK public sector bodies and businesses of all sizes are going to need to ensure the AI tools they use are fit for purpose – without steps in place to make the necessary checks, there is a real risk of AI being used incorrectly and potentially creating harm.
Defining terms is important for several reasons, not least because without clarity and consistency, those involved in the development, implementation and regulation of AI technologies are likely to find themselves speaking at cross purposes. Clear terms, used in agreed ways, help prevent misunderstandings and misinterpretations that could lead to errors or inefficiencies.
Well-defined terminology is also crucial for establishing ethical guidelines and legal standards. It allows policymakers to create regulations that address specific aspects of AI, such as privacy, bias, and accountability, ensuring that AI technologies are developed and used responsibly. Terminology related to AI assurance practice must convey the requirements of legal standards, but as we’ve found from our engagement with industry for DSIT, this issue of terminology prevents businesses of all sizes from understanding what they need.
“Is the language of AI assurance clear? I don't know whether it's the language per se, I think there's probably a lack of vocabulary… to me it's a question of ‘what are you assuring? What are you trying to show that you've achieved?’ And that all stems from: ‘what does the public want from the technology, what do they not want, what do regulators expect to see, how much evidence is enough evidence?’”
- private sector procurer
Assurance language that is clear and well understood is also a pillar of effective risk management.
By precisely defining terms like "bias," "transparency," and "explainability," businesses and their stakeholders are far more likely to understand potential risks and take action to limit their impact. Shared meaning between leaders, teams, suppliers and clients is important if issues with AI are to be tackled in an appropriate way.
Finally, and perhaps most importantly, without clear AI assurance terminology, AI technologies are unlikely to be widely accepted and trusted. Assurance is one of the key mechanisms through which public bodies and businesses convey the trustworthiness of AI to the public. This is where clear terminology can be most powerful – it helps to demystify complex concepts, making AI more accessible to non-experts and increasing public trust. It’s also important in demonstrating the trustworthiness of brands – not only private sector businesses, but also local government.
Being a trusted source of information
As our research highlights, there’s a lot to be done across business and the public sector to share knowledge and learn about how AI tools and services work in reality. At DG Cities, this is the kind of role we’re playing with local authorities today: making sense of a complex and changing field. If you’re keen to learn more about the AI tools already in the field, and the assurance steps you should take to make better decisions on AI, get in touch.