DG Cities

Guest blog: Ed Houghton shares four vital steps towards public trust in AI for LOTI

Last year, DG Cities was commissioned by the Department for Science, Innovation and Technology to research AI assurance in industry, and to investigate the language used to describe approaches to evaluating AI in different sectors. This work formed part of the government’s report, Assuring a Responsible Future for AI, published in November. In a guest blog for LOTI (the London Office of Technology and Innovation), Ed Houghton, who led the research, draws practical lessons from some of the key findings.

Transparency is a cornerstone of good governance, yet many public processes still feel opaque to citizens. As AI increasingly shapes decision-making, questions arise about how open government and transparent democracy can thrive when most AI systems remain closed and complex. This is where AI assurance plays a crucial role.

AI assurance refers to the processes that ensure AI tools are used as intended and work effectively, ethically, and without causing harm. It is particularly vital in local government, where public trust and service effectiveness are paramount.

Our research explored how AI assurance is understood across sectors. Through a national survey of over 1,000 leaders and interviews with 30 managers, we identified four key steps for maximising the benefits of AI safely and transparently: defining common AI terminology, fostering cross-department collaboration, prioritising continuous evaluation, and engaging communities to build public understanding and trust in AI systems.

Read the full piece on LOTI’s website.