Understanding how we describe trustworthy, responsible and ethical AI
Less than half (44%) of UK businesses using AI are confident in their ability to demonstrate compliance with government regulations, according to a new report released by the Department for Science, Innovation and Technology (DSIT). DG Cities contributed to this research, published under the Responsible Technology Adoption Unit, which highlights the risk of businesses and public sector organisations adopting AI tools without sufficient safeguards in place. For our latest blog, our Research & Insights Director, Ed Houghton, who led the research, explains why the words we use to define emerging tech matter.
Into a world already full of jargon and buzzwords comes AI, generating its own. Almost overnight (although those in the field will no doubt argue otherwise), business has had to run to keep up as new terms, such as gen-AI, have entered the lexicon. Of course, the day-to-day use of jargon might be irritating, but beneath it lies a critical challenge: within the AI space there is no clear language that people believe, understand and trust.
Nowhere in the AI field is language more important than in AI assurance. Put simply, assurance is the practice of checking that something does what it is designed and intended to do. For businesses using AI, assurance is critical to assessing and validating the way AI uses business or consumer data. In regulated industries like banking, AI assurance is becoming a key requirement of responsible practice.
At DG Cities, we were recently commissioned by DSIT to explore assurance language as part of the government's push to build the UK's AI assurance ecosystem. Our aim was to engage with UK industry to understand the barriers to using assurance language, and the importance of standardised terms in helping businesses communicate with their customers and stakeholders. We surveyed over 1,000 business leaders and interviewed 30 in greater depth to explore their views.
What we found gives an interesting picture of this emerging space: excitement and interest in making use of AI, but concern about doing the right thing. For example, fewer than half (44%) felt confident they were meeting the assurance requirements set by regulation. The reasons for this were numerous, but two consistent themes were a lack of clear terminology and a lack of UK and international standards.
We also spoke to the public sector about assuring AI in public services, including in local government. Here, similar issues came up: a lack of knowledge about how to assure AI, and inconsistent terminology. We believe this is a barrier to the safe adoption of AI in sectors where it could have major value.
It's great to see our work for DSIT now published. We see this as a major opportunity for the UK to lead globally: to create AI assurance businesses and tools designed to keep AI safe and trustworthy, and to ensure the public is always protected when AI is used.