Safety drives public support of automated vehicles

As we release the latest DeepSafe research findings into public perception of automated vehicles, Behavioural Scientist, Lara Suraci explores some of the insights behind the data. What did our results tell us about the public’s understanding of road safety - and how might this inform messaging when it comes to advancing automated vehicles on UK roads?

Automated vehicles (AVs) promise more efficient travel, reduced emissions and greater accessibility for all, but what role does safety play in shaping public acceptance?

Let’s put AVs to the side for the moment and talk about road safety in the UK as it stands. In 2023, the Department for Transport put the number of people killed or seriously injured (KSIs) on UK roads at a sobering 29,711, with vulnerable road users such as pedestrians, cyclists, and motorcyclists disproportionately affected.

A major contributing factor in these incidents is human error: it is estimated that 88% of road traffic accidents in the UK are caused by human error, including behaviours like speeding, distraction from mobile phones, running red lights or failing to give way.

In light of these numbers, it may come as a surprise that a survey study DG Cities conducted in December 2024 indicates that many do not consider improving road safety a priority: when we asked a representative sample of 1,000 UK respondents to select their top two priorities for the UK transport system, only 37% chose road safety; a low proportion compared to the 53% who chose affordability and the 46% who chose nationwide equality.

One possible reason for this is that the above statistics are not, in fact, common knowledge: when asked to guess the number of KSIs on UK roads in 2023, a staggering 92% of our sample reported an estimate below the official figure and the average estimate only came to 11,402 – less than half of the official figure. In other words, road safety may not be at the forefront of public concern because, quite simply, many are not aware of the scale of the problem.
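For illustration, the arithmetic behind that comparison in a minimal Python sketch (figures taken from the survey summary above):

```python
# Perception gap between the average survey estimate and the official
# Department for Transport KSI figure for 2023 (both cited above).
official_ksis = 29_711
average_estimate = 11_402

share_of_official = average_estimate / official_ksis
underestimate = official_ksis - average_estimate

print(f"Average estimate is {share_of_official:.0%} of the official figure")
print(f"That is an underestimate of {underestimate:,} people killed or seriously injured")
# Roughly 38% of the official figure, i.e. less than half.
```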

How does this compare to safety expectations of AVs?

In 2024, the UK Government established the following safety benchmark for AVs: in order to be allowed to operate on UK roads, AVs should achieve an equivalent level of safety to a ‘competent and careful human driver’. Aside from the inherent ambiguity of this concept, our research suggests that public support will require AVs to not only meet but exceed this benchmark. Presented with an AV that is slightly less safe than a competent and careful human driver, a negligible 3.7% of our respondents indicated their willingness to use it; however, once the AV in question is as safe as a competent and careful human driver or slightly safer, this proportion jumps to 36.8% and 56.5%, respectively. Only once the AV is much safer than a competent and careful human driver do 3 out of 4 respondents report their willingness to use it.
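The pattern described above amounts to a simple acceptance staircase. As a minimal sketch (the variable names are ours; the percentages are the survey figures quoted in the paragraph above, with ‘much safer’ shown as 75% for the ‘3 out of 4’ result):

```python
# Reported willingness to use an AV, by its safety relative to a
# 'competent and careful human driver' (December 2024 survey figures).
willingness_by_relative_safety = {
    "slightly less safe": 3.7,
    "as safe": 36.8,
    "slightly safer": 56.5,
    "much safer": 75.0,  # "3 out of 4 respondents"
}

for level, pct in willingness_by_relative_safety.items():
    bar = "#" * round(pct / 5)
    print(f"{level:<20}{pct:5.1f}%  {bar}")
```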

Similarly, a second survey study from February 2025 revealed that the proportion of respondents who would support the widespread introduction of AVs on UK roads almost doubles when AVs promise a 10% reduction in the number of KSIs compared to no change – so safety gains over human drivers are not only a prerequisite for acceptance, but also an effective tool to boost public support of AVs.

Relative Safety Expectations of AVs

How can we reconcile this deprioritisation of road safety in the general transport system with the fixation on high safety standards for AVs?

In addition to misconceptions about the current state of road safety in the UK, safety as a baseline requirement may be overlooked in everyday considerations of the transport system, but shift into focus when a disruption to the status quo – such as the introduction of AVs – is debated. Ensuring adherence to high safety standards in human drivers may also feel like a complex and unattainable goal due to the involvement of unpredictable factors like individual preferences and behaviour. Machines like AVs, on the other hand, which can be programmed to follow set rules, may offer a clearer path to safety – and thus trigger much higher expectations thereof.

But perhaps the most important question to consider is not why this discrepancy exists but how we can leverage it to foster public support for AVs and, ultimately, harness their potential for increased road safety.

While safety gains over human drivers are often cited as a key selling point for AVs – and seem to be effective in boosting public AV support – this argument may not be as powerful as it could be if the public continues to underestimate how unsafe UK roads are. In fact, the perception that road safety is not as critical a concern as statistical evidence suggests might lead to less urgency in the adoption of AVs.

Understanding and leveraging this knowledge gap could therefore play a pivotal role in advancing public support for AVs. It presents a crucial opportunity for public education and awareness campaigns that address misconceptions regarding the current state of road safety in the UK and contextualise it with regard to the true potential of AVs to make our roads safer.


Learn more

This public engagement research takes a closer look at public attitudes towards automated vehicles and expectations regarding their safety. We use approaches from behavioural economics to assess the impact of safety messaging on increasing acceptance.

Our staircase model assessed how safety statistics can be used to support acceptance of automated vehicles onto UK roads, and uncovered how different demographics engage with information on automated vehicle safety. 

Read our summary or download the full report here.

Do we need a public information campaign on climate change - and how might it work?

At DG Cities, we focus on working with the public to advance the adoption of new technologies, services and deliver net zero neighbourhoods. Central to this has to be a shared understanding of ‘why’ – the need to shift to a low carbon economy to mitigate the impacts of climate change and a desire to make the places we live and work better for everyone. We have been reflecting on the role of government information in reinforcing this need for change, and how our work with communities might offer useful lessons in countering disinformation, mistrust and AI-generated content.

What was the last public information campaign you remember seeing?

“Hands, Face, Space?”

Keep Britain Tidy?

Maybe a terrifying warning not to boil a kettle on a boat?

Historically, these campaigns have covered everything from seatbelts and road safety to preparedness for a missile strike, but we have yet to see a full-scale, government-led campaign on the climate crisis linked to the specific actions we should take, such as transport and consumer choices.

Hands, Face, Space Covid-19 messaging, 2020

The demand, however, appears to be there – a petition to Parliament calling for a campaign has, at last count, more than 23,000 signatures.

But what would the delivery of an effective climate crisis campaign look like in 2025?

Gone are the days when a sternly narrated television ad, a page in a newspaper or a well-placed billboard were enough. In a world shaped by AI-generated misinformation, hyper-personalised digital content on social media, and polarised public discourse, campaigns must be smarter, more localised and, we believe, more deeply rooted in behavioural science.

Today, campaigns are often digital-first, as daily newspapers and scheduled ad breaks on TV channels become less of a shared routine. The NHS’s COVID-19 ‘Stay at home’ messaging was a rare modern example of a multi-platform campaign that achieved mass adoption, using a mix of clear messaging, emotional appeal, and trusted spokespeople. With the rise of misinformation, it’s no longer enough to simply broadcast a message and find ways to reach the majority of people. As well as making it heard, it’s vital to ensure the information is trusted, believed and acted upon.

Government THINK road safety campaign

Misinformation in the age of AI

Public information campaigns now face unique risks. AI-generated deepfakes and viral misinformation can drown out official messaging, creating confusion and distrust. Research by Ofcom found that around two in five (39%) of UK adults had recently seen misinformation online. If AI can generate convincing fake news or misleading climate narratives rapidly and at scale, how do we ensure the public gets the right message?

One approach is to lead with transparency. Government campaigns must be upfront about how and why information is being shared, and ensure that sources are clear and credible. The rise of AI also means actively combating misinformation, working with tech platforms to identify and counter false claims. Meta’s announcement that it would no longer support fact-checking in the US has alarmed many.

Initiatives in the UK, like the rebranded National Security Online Information Team, will be essential in ensuring climate messaging is both visible and trusted. This is something we touched on at DG Cities in our work on AI Assurance for the Department for Science, Innovation and Technology. Here, our research focused on identifying common language used to describe trust in AI tools – the shared basis from which policy and public guidance can develop.

Agreeing the message – and who should deliver it

With so many strands to the climate crisis, identifying the behaviours that will make the greatest difference, as well as the behaviours that a public information campaign is most likely to shift, is politically as well as practically challenging. Government trust is also fragile. According to the 2024 Edelman Trust Barometer, trust in government and the media is at its lowest point for a decade. That’s why who delivers a public message now matters as much as the message itself.

From our experience working on behaviour change campaigns with local government, we have seen the difference a hyper-local approach, where people feel the message is relevant and directed at them personally, can make. We believe that making climate change campaigns local by design, so they are delivered by councils, community leaders, or even trusted local businesses, could be a useful principle. Our work with the Royal Borough of Greenwich supporting communities to recycle more and reduce fly-tipping is a good example of this. We used simple messages, designed by the community, aligned to clear design principles, with scope for local nuance.

Our project involved locally created murals in Greenwich

Research in behavioural science also consistently shows that people are more likely to adopt new behaviours when the message comes from a source they trust. For example, vaccine uptake increased when delivered through community healthcare providers rather than through broad national messaging. A climate campaign could adopt a similar model, particularly when local interventions speak directly to the people most affected: a council-led campaign in areas prone to flooding that provides real-time climate data, or targeted messages in rural communities explaining the economic and energy security benefits of wind farms.

When we consume so much media, can a campaign ever have the same impact?

In an era of endless digital distractions, making a campaign stand out is harder than ever. This is where innovative design, behavioural psychology, and emerging technologies come into play.

One method is using ‘disruptive design’, by which we mean unexpected interventions that interrupt routine behaviour. Data at its most incorruptible and clear could be the message. In London, we have cycle lane counters that show the number of users per day. The Netherlands experimented with digital billboards that respond in real-time to pollution levels, showing a visual representation of air quality. Similar dynamic campaigns could be used in UK cities, making climate data visible and immediate. In the UK, the Body Shop even experimented with the medium as the message, by installing billboards designed to actively remove pollutants from the environment.

Augmented reality (AR) and AI-driven interactive campaigns also have potential to capture people’s imagination. Imagine scanning a QR code at a bus stop and instantly seeing how climate change could impact your neighbourhood in twenty years’ time. These kinds of interventions bridge the gap between abstract global issues and personal, immediate impact.

The future of public campaigns

To be effective in the AI age, it’s clear that public information campaigns must evolve. The principles here are local, interactive, trustworthy, transparent and impactful.

If the government is serious about tackling the climate crisis by uniting the country with a greater understanding of the need to adapt our behaviours, a national information campaign is overdue. But to work, it must be unignorable, trustworthy, and smart enough to navigate the new landscape of AI-powered influence and polarised politics. Because in the end, the real challenge isn’t just what we say, but how, when, and where we say it and whether it leads to meaningful change.


To learn more about our Behavioural Innovation practice, read our latest brochure, which introduces our unique approach and recent work. If you are a council looking for an effective, affordable way to improve resident satisfaction and deliver change in your area, get in touch for a chat to see how we can help.

Will the Government’s AI Action Plan really deliver for UK workers?

The Government’s bold new plans - and even bolder use of language to ‘unleash AI’ - made the rounds in recent weeks’ headlines, but beneath the gleaming potential and the lofty optimism lies a critical question: will the rapid advance of AI lift UK workers as it claims, or leave them behind?

Drawing inspiration from Nobel laureates Daron Acemoglu and Simon Johnson’s Power and Progress, this piece by DG Cities’ AI specialist and Graduate Consultant, Nima Karshenas dives into the hidden risks of automation-driven displacement. By examining historical lessons and the blind spots in the unveiled AI policy, he uncovers how thoughtful procurement strategies can ensure that progress works for the many — not against them.

With AI investment growing at unprecedented speeds, governments are scrambling to stake their claim in this transformative market. The UK, under Labour’s growth-centric strategy, has every reason to push ahead. Early movers in AI have the potential to establish themselves in global markets and reap the economic rewards that come with it.

This pressure to act quickly has driven a liberal, growth-first approach. The plan’s emphasis on attracting investment, building infrastructure, and establishing initiatives like the AI Safety Institute reflects a strong focus on cultivating an ecosystem that supports the industry. But economic safety—the security of workers in the face of automation—remains largely absent.

Unpacking the Government’s assumptions

In his speech, the Prime Minister really drove home that the opportunity plan was going to deliver for workers. On inspection of the corresponding policy document, this claim rests on the following assumptions:

  1. AI drives the economic growth on which the prosperity of our people and the performance of our public services depend;

  2. AI directly benefits working people by improving health care and education and how citizens interact with their government[1];

  3. The increasing prevalence of AI in people’s working lives opens up new opportunities, rather than simply threatening traditional patterns of work.

At first glance, these ideas seem promising, but history tells a more cautionary tale. The first assumption brings us to the concept of the ‘productivity bandwagon’ outlined by Acemoglu and Johnson in Power and Progress: the commonly accepted principle in economics that a breakthrough or improvement in technology leads to increased productivity, which in turn translates into better worker conditions through wealth creation.

By examining two historical examples outlined in their research, we are able to take a more critical look at this assumption:

The productivity bandwagon process illustration

The power loom era

The automation of weaving by the power loom displaced skilled hand weavers. While productivity soared, the resulting wealth accumulated among capital owners, not workers. Displacement without task creation left workers with lower wages, harsher working conditions, and limited agency for the next 60-70 years.

Illustration of the power loom at work. Source: Hulton Archive/Stringer/Getty Images

 

The digital revolution

Figure 3: Real Log Wages by education level in the United States (source: Autor, David. 2019. "Work of the Past, Work of the Future." AEA Papers and Proceedings, 109: 1–32.)

The rise of computers and automation starting in the 1970s promised greater efficiency, but for many workers (in the US) — and especially those without university degrees — real wages stagnated or even declined. The benefits of increased productivity were concentrated among the highly educated and capital owners, worsening income inequality.

This rise in automation was coupled with unprecedented neo-liberal tax reforms rooted in ‘trickle-down’ economics, which no doubt amplified income inequalities; it is therefore difficult to directly attribute the fall in real wages to automation alone.

These examples reveal the key flaw in the productivity assumption: while technological advances drive productivity, they don’t guarantee better outcomes for workers. For that to happen, we must actively shape the conditions under which productivity gains are shared.

Creation or displacement?

The real question isn’t whether AI can increase productivity — it undoubtedly will — but what type of productivity we are fostering. Productivity that creates new tasks and industries can generate opportunities for workers. In contrast, productivity that automates existing tasks often leads to job displacement, pushing wealth upwards rather than spreading it across the economy.

This differentiation is critical. 

The plan’s third assumption - AI’s ability to create new opportunities - recognises this challenge, but doesn’t address it head-on. The AI Opportunity Action Plan relies on market forces to create these new opportunities, ignoring the lessons of the digital age. Without targeted policies, there’s no guarantee the market will fill the gaps left by displaced jobs, especially under the deregulatory stance outlined in the plan. 

The position the government has taken is all the more surprising given the threats to lower-skilled jobs identified in the 2021 report by the Department for Business, Energy and Industrial Strategy - an analysis conducted on the back of Frey and Osborne’s gloomy prediction in 2019 that around 35% of UK jobs were at high risk of being automated by computers. Although the estimated scale of the impact of automation is yet to materialise, there is no doubt that the recent rapid advancements in AI are going to accelerate this transition in labour demand, and the government needs an AI strategy that prioritises the economic consequences we can no longer ignore.

Nonetheless, by being conscious of the AI products we procure and develop, as organisations we can capitalise, excuse the pun, on the productivity that AI offers without displacing workers. The key message to drive home here is that as organisations, we need to procure AI products that Augment and Create instead of Trimming - but what does this actually mean, and how can this be built into procurement processes?

AI to Augment & Create (A&C)

What does it mean and what benefits does it bring to an organisation?

Now, whether an AI tool is augmenting and creating depends entirely on the context of each organisation. AI tools that automate certain tasks can in fact be augmenting and creating - a seemingly contradictory statement given all that has been discussed, but one that leads to perhaps the most important distinction.

Every organisation must first take a critical look at its current operations and evaluate the impact automation will have on them. An example best demonstrates this: if your organisation holds critical datasets but could not previously extract value from them because of the extensive cost and resources attached to cleaning, sorting and structuring them, and AI tools can help automate that process at a fraction of the cost, then you are bringing value to your organisation without trimming your operations. The distinction lies in understanding how AI interacts with organisational operations, as the same AI tool can streamline one organisation's operations while augmenting another's. It's not a one-size-fits-all solution, but rather a nuanced approach that requires a critical understanding of AI's role within the organisation.
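To make the dataset example concrete, here is a minimal, purely illustrative pandas sketch of the kind of cleaning and structuring work such a tool might automate; the column names and rules are hypothetical, and a real deployment would sit alongside the AI-assisted tooling discussed here rather than replace it:

```python
import pandas as pd

# Hypothetical messy dataset an organisation already holds but has never
# been able to exploit because of the cost of cleaning it by hand.
raw = pd.DataFrame({
    "postcode": [" se10 8ja", "SE18 6PF ", None, "se10 8ja"],
    "energy_rating": ["d", "C", "c", "D"],
    "reported_issue": ["Damp ", "damp", "Boiler fault", "DAMP"],
})

cleaned = (
    raw
    .dropna(subset=["postcode"])  # drop unusable rows
    .assign(
        postcode=lambda d: d["postcode"].str.strip().str.upper(),
        energy_rating=lambda d: d["energy_rating"].str.upper(),
        reported_issue=lambda d: d["reported_issue"].str.strip().str.lower(),
    )
    .drop_duplicates()
)

# A structured summary that analysts (or downstream AI tools) can now query.
print(cleaned.groupby("reported_issue").size())
```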

A few examples of the kinds of tools we are talking about:

1. AI-Powered Data Querying and Insight Generation

AI tools can process complex queries across vast datasets, identifying actionable insights that support better decision-making. For example, local authorities might use such tools to analyse housing or transportation data, uncovering trends that inform smarter policy decisions. Similarly, businesses can employ AI to assess operational data, optimising strategies based on clear, data-driven insights.

2. Patient Health Summaries for Healthcare Professionals

AI can consolidate and summarise patient health records, providing doctors with concise yet comprehensive overviews of a patient’s medical history. This enables faster, more informed decision-making, improving treatment outcomes. Additionally, AI transcription tools can handle administrative tasks, such as updating patient records, freeing doctors to focus on seeing more patients and handling critical cases.

3. AI-Driven Sentiment Analysis for Public Engagement

Previously, robustly analysing public sentiment toward local plans or policies was challenging with standard techniques. AI now enables the processing of large volumes of feedback—be it survey responses, social media comments, or public consultations—to evaluate sentiment at scale. This ensures that community perspectives are integrated into the design and planning of local spaces, allowing for longer, more thoughtful, and inclusive planning processes.
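As a rough sketch of how such sentiment analysis might be run, here is one possible approach using the open-source Hugging Face transformers pipeline; the example responses are invented, and a production tool would likely use a model tuned to consultation language, with topic tagging alongside it:

```python
from transformers import pipeline

# Off-the-shelf sentiment classifier; a real deployment would likely use a
# model tuned to consultation language and add topic tagging alongside it.
classifier = pipeline("sentiment-analysis")

consultation_responses = [
    "The new cycle lane makes my commute feel much safer.",
    "Parking changes were introduced with no warning at all.",
    "More greenery on the high street would be very welcome.",
]

for response, result in zip(consultation_responses, classifier(consultation_responses)):
    print(f"{result['label']:>8}  ({result['score']:.2f})  {response}")
```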


Shaping AI procurement around augmentation and creation is not just a safeguard against workforce displacement, it’s a strategy for making organisations smarter, not just more efficient. This approach fosters a healthier work environment, supports long-term growth, and ensures institutional memory is preserved. 

A smarter organisation: AI tools that augment decision-making provide workers with enhanced analytical capabilities, leading to more informed strategies and better long-term outcomes, improving productivity without compromising the workforce.

A healthier work environment: Reducing repetitive tasks allows employees to focus on creative and high-value work, improving job satisfaction, fostering professional growth, and attracting high-level talent.

Long-term growth: Prioritising augmentation ensures businesses don’t just chase immediate efficiency gains but develop resilient, adaptable teams equipped for the future.

Institutional memory preservation: AI tools that work alongside employees rather than replacing them help retain and structure knowledge within an organisation, mitigating the risks of staff turnover, and an over-reliance on black-box technologies.


A&C Procurement Framework

Impactful procurement thrives on continuous learning and iteration, which we've embedded into a dynamic framework that integrates A&C principles.

A final note…

While we welcome the Government’s initiative in recognising the transformative potential of AI for the UK economy, considerable care needs to be taken in policy development to avoid repeating the mistakes of history. Economic growth alone does not necessarily lead to better outcomes for UK workers, and without thoughtful intervention, the benefits of AI risk further widening income inequalities and lowering real wages among UK workers.

A procurement strategy that prioritises AI tools which augment human capabilities and create new opportunities will not only safeguard the economy against growing inequalities but also deliver long-term, robust benefits for organisations. While automation is not an inherent hindrance to the economy, understanding where and when to apply it is critical to the sustainability of both businesses and the wider economy.

 

At DG Cities, we help organisations navigate this evolving landscape, identifying, demystifying and implementing AI solutions that deliver impact on the ground and drive sustainable growth, whilst protecting the workforce. By embedding A&C principles into procurement, we can shape an AI-driven future that works for everyone. If you would like to continue this conversation, or find out how we can help with your AI procurement, then please feel free to get in touch!


[1] We largely agree with this assumption, provided careful design of the AI products, but discussion is outside the scope of this blog.

Guest blog: Ed Houghton shares four vital steps towards public trust in AI for LOTI

Last year, DG Cities was commissioned by the Department for Science, Innovation and Technology to research AI assurance in industry, and to investigate the language used to describe the approaches to evaluating AI in different sectors. This work formed part of the government’s report, Assuring a Responsible Future for AI, published in November. In a guest blog for LOTI (London Office of Technology & Innovation), Ed Houghton, who led the research, draws practical lessons from some of the key findings.

Transparency is a cornerstone of good governance, yet many public processes still feel opaque to citizens. As AI increasingly shapes decision-making, questions arise about how open government and transparent democracy can thrive when most AI systems remain closed and complex. This is where AI assurance plays a crucial role.

AI assurance refers to the processes that ensure AI tools are used as intended, working effectively, ethically, and without harm. It’s particularly vital in local government, where public trust and service effectiveness are paramount.

Our research explored how AI assurance is understood across sectors. Through a national survey of over 1,000 leaders and interviews with 30 managers, the research identified key steps for maximising the benefits of AI safely and transparently. These include defining common AI terminology, fostering cross-department collaboration, prioritising continuous evaluation, and engaging communities to build public understanding and trust in AI systems.

Read the full piece on LOTI’s website.

 


How are the January resolutions going?

New year, new you? Or not.

It’s not always easy to go it alone when it comes to sustaining resolutions. Our behavioural innovation team takes a neighbourhood-level approach to changing behaviours, working together with communities to bring about the positive change they want to see in their areas.

DG Cities’ Behavioural Innovation approach draws on methodologies from behavioural science, service design and place-based research to overcome urgent environmental, social and economic challenges.

Our website is full of useful tips, articles and case studies, including:
- Behavioural Economist, Leanne Kelly's five points to consider to make change stick.
- A look at travel and behaviour change & the journey from intention to action.
- How can you be sure it's ethical?
- Understanding consumer barriers to tech, particularly self-driving cars.
- A conversation with Dr Sanchayan Banerjee, the leading expert on nudge theory.
- How we're putting principles into practice, such as boosting cycling uptake in Stevenage.

The team is looking forward to tackling new challenges in 2025, so get in touch to see how we can help your council or organisation: [email protected]

Find out more about our work, recent projects and some of the ways we help councils save money, resources and improve places: https://www.dgcities.com/behavioural-innovation

Welcome Lara!

We’re starting the year by welcoming in the new with an introduction to our most recent member of the team, Behavioural Scientist, Lara Suraci!

Lara has a PhD in Behavioural Economics with a research focus on barriers to the adoption of novel, digital technologies - a key theme of many of our projects, from understanding attitudes to new mobility solutions to driving uptake of energy saving devices. Already, she’s been researching, designing, punning and embracing the unexpected, as she explains…

 

A few hours into my first day at DG Cities, a member of the team approached me with a piece of advice that went something like this, “You’ll soon realise two things about DG Cities: no two team members have the same background, and figuring out who’s an expert in what is going to take a while – but one thing you can be sure of is that every single person here really cares about their work.”

I’m now about two months in, and can confirm that this is true on both counts. I’m thrilled to be joining a team that covers so many different areas of expertise (even if I am indeed still wrapping my head around who does what) and in which the passion people have for their work is palpable.

I joined DG Cities as a Behavioural Scientist not too long after I finished my PhD in Behavioural Economics. However, one reason why I feel so at home in a team with a strong focus on interdisciplinarity is that my own journey into the field of Behavioural Science was far from straightforward: in fact, I still remember sitting in my dorm room in Yokohama, despairing over what to do after finishing my degree in Modern East Asian Studies, when I stumbled across Behavioural Economics – a discipline that combined my background in economics with my passion for psychology and human behaviour.

Ten years and three degrees later, I was still just as passionate about Behavioural Science as I had been on that first day in Yokohama… but I was equally passionate about the decision that a career in academia is definitely not for me!

DG Cities stood out to me because of their holistic approach to problem-solving and their commitment to creativity and to continuous improvement – in other words, going beyond the mindset of ‘this is how things have always been done’ and towards a ‘how can we build on what we know and make this even better’ way of thinking.

In addition to that, their focus on amplifying the benefits of innovation while mitigating its risks is well-aligned with my past research into behavioural reasons for why so many people seem reluctant to delegate decisions to AI and other novel technologies, often despite recognising their significant advantages.

So far, I have spent most of my time working on a project about Connected and Automated Vehicles that was, luckily, just familiar enough for me to jump in with both feet – and just different enough to challenge me and teach me new skills. Given my academic background, I am very mindful of the fact that I still have a lot to learn about applying my expertise to real-world behavioural issues but I am fortunate to be surrounded by a team that is both kind and extremely capable.

Looking ahead, I am beyond excited for this upcoming year and the projects to come – after all, I did once spend half an afternoon researching Christmas tree puns, which was certainly not on my bingo card for my first month in a new job… so if there’s one thing I am certain of, it’s that DG Cities is bound to surprise me!

Find out more about our Behavioural Innovation practice here.

Lost in translation? How language defines trust in AI tools

Last month, the Department for Science, Innovation and Technology released the report Assuring a Responsible Future for AI. This important piece of work highlights the challenge of businesses and public sector organisations adopting AI tools without sufficient safeguards in place. For our latest blog, our Research & Insights Director, Ed Houghton looks at the importance of choosing the right words for assurance, and takes an in-depth look at some of the trust issues our research discovered.

Humans are hard-wired to consider trust, whether we’re buying a new car, meeting a stranger, or even understanding when to cross the street. We’re constantly assessing the world around us and deciding whether or not, through a decision we or someone else makes, we’re likely to benefit or come to harm. The problem is that humans aren’t always great at knowing what should and shouldn’t be trusted.

Trust, or more specifically trustworthiness, is a central element in the field of AI acceptance.

Trustworthiness, defined as displaying the characteristics that demonstrate you can be trusted, is gold dust to those looking to make use of AI in their tools and services. Tech designers go out of their way to make sure you trust their tools, because without trust, you’re very unlikely to come back. UX designers might choose voices that convey warmth, or use colloquialisms and local language to help ease interactions and build rapport. In text-based interactions too there’s a need for trust – some tools might use emojis to appear more authentic or friendly, others might seek to reassure you by providing references for the answers they generate. These are all methods to help you trust what AI is doing.

The issue, however, is that trust in AI is currently being fostered by the tools seeking your engagement. This obvious conflict means that people using AI, whether employees or consumers, may be placing their trust in a risky product or tool – and in an emerging market that is evolving at pace, that creates real risk.

Understanding the risk AI presents, and the language used by business to assure products, was the topic of our most recent study for government. The DG Cities team undertook research for the Responsible Technology Adoption Unit and the Department for Science, Innovation and Technology, exploring AI trust from the perspective of those buying and using new products in the field today – what AI assurance means to them, and what they need in order to assure new tools coming to the market. Our approach explored how AI tools are currently understood, and key to people’s understanding was the concept of fairness.

Understanding fairness of AI tools

For AI tools to be used safely, there’s a need to ensure their training is based on real world data that represents the reality in which the tool is likely to operate, but which also protects it from making decisions that are biased or limit outcomes. We found an example of the reality of “good bias” vs “bad bias” when exploring the use of AI in recruitment technology – here, bias from both objective and subjective measures is considered to drive a hiring decision – but for those using the tool, there is a need to ensure there is no bias related to protected characteristics. This challenge is an area where fairness comes to the fore:

“Fairness is the key one. And that intersects with unwanted bias. And the reason I try and say ‘unwanted bias’ is that you naturally need some (bias). Any AI tool or any kind of decision making tool needs some kind of bias, otherwise it doesn't produce anything. And so, I think front and centre is how does it work, does it work in the same way for all users?”

- private sector procurer

You can imagine a similar scenario playing out in a local authority setting in which resident information is used to assess housing allocation, or drive retrofit and improvement works to social housing stock. Here, bias must be understood to ensure the tool is delivering value to all groups – but with the introduction of certain criteria, an equitable approach may be created, whereby certain characteristics (e.g. low income, disabilities) are weighted differently. Fairness here is critical – and is a major reason why assurance processes, including bias assessments and impact evaluations, are key practices for local authorities to build their capabilities in.
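To illustrate the kind of check described above, the sketch below scores hypothetical housing applications using deliberately weighted criteria and then compares average scores across groups, surfacing any unwanted bias for human review. The fields, weights and records are invented for illustration and are not a recommended allocation policy:

```python
from statistics import mean

# Hypothetical need-related criteria and weights: an equitable scheme may
# deliberately weight some characteristics (e.g. low income, disability) more heavily.
WEIGHTS = {"low_income": 2.0, "disability": 2.5, "overcrowded": 1.5}

applicants = [
    {"id": "A", "low_income": 1, "disability": 0, "overcrowded": 1},
    {"id": "B", "low_income": 0, "disability": 1, "overcrowded": 0},
    {"id": "C", "low_income": 1, "disability": 1, "overcrowded": 1},
    {"id": "D", "low_income": 0, "disability": 0, "overcrowded": 1},
]

def priority_score(applicant: dict) -> float:
    """Weighted sum of need-related criteria: the intended, 'wanted' bias."""
    return sum(weight * applicant[key] for key, weight in WEIGHTS.items())

scores = {a["id"]: priority_score(a) for a in applicants}

# Simple fairness check: compare average scores for one group against everyone
# else, so any unwanted bias can be surfaced for human review.
in_group = [scores[a["id"]] for a in applicants if a["disability"]]
out_group = [scores[a["id"]] for a in applicants if not a["disability"]]

print("Scores:", scores)
print(f"Mean score (disability): {mean(in_group):.2f} vs (no disability): {mean(out_group):.2f}")
```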

Making AI assurance more accessible

UK public sector bodies and businesses of all sizes are going to need to ensure the AI tools they are using are fit for purpose – without steps in place to make the checks needed, there is real risk of AI being used incorrectly and potentially creating harm.

Defining terms is important for several reasons, not least because without clarity and consistency, it is likely that those involved in the development, implementation, and regulation of AI technologies may find themselves speaking at cross purposes. Clear terms, used in agreed ways, help prevent misunderstandings and misinterpretations that could lead to errors or inefficiencies.

Well-defined terminology is also crucial for establishing ethical guidelines and legal standards. It allows policymakers to create regulations that address specific aspects of AI, such as privacy, bias, and accountability, ensuring that AI technologies are developed and used responsibly. Terminology related to AI assurance practice must convey requirements for legal standards, but as we’ve found from our engagement with industry for DSIT, this issue of terminology prevents businesses of all sizes from understanding what they need.

“Is the language of AI assurance clear? I don’t know whether it’s the language per se, I think there’s probably a lack of vocabulary… to me it’s a question of ‘what are you assuring? What are you trying to show that you’ve achieved?’ And that all stems from: ‘what does the public want from the technology, what do they not want, what do regulators expect to see, how much evidence is enough evidence?’”

- private sector procurer

Assurance language that is clear and well understood is also a pillar of effective risk management.

By precisely defining terms like "bias," "transparency," and "explainability," businesses and their stakeholders are far more likely to understand potential risks and take action to limit their potential impact. Shared meaning between leaders, teams, suppliers and clients is important if issues with AI are to be tackled in an appropriate way.

Finally, and perhaps most importantly, without clear AI assurance terminology, it’s unlikely that AI technologies will be widely accepted and trusted. Assurance is one of the key mechanisms through which public bodies and businesses convey the trustworthiness of AI to the public. This is where clear terminology can be most powerful – it helps to demystify complex concepts, making AI more accessible to non-experts and increasing public trust. It’s also important in demonstrating the trustworthiness of brands – not only private sector businesses, but also local government.

Being a trusted source of information

As our research highlights, there’s a lot to be done across business and the public sector to share and learn about AI tools and services in practice. At DG Cities, this is the kind of role we’re playing with authorities today, helping to make sense of a complex and changing field. If you’re keen to learn more about what AI tools are in the field, and the types of assurance steps you should take to make better decisions on AI, get in touch.


Read the full report, Assuring a Responsible Future for AI.


Welcome Gabriela!

As is customary, we invite our newest team members to say hello and share an introduction to their work at DG Cities, some of their previous experience and how they are finding their first few weeks. We’re delighted to welcome our new Graduate Evaluation & Data Analyst, Gabriela Mihaylova.

After just over a month as a Graduate Evaluation and Data Analyst at DG Cities, I can already say that this is a dynamic environment where the missions and projects are as inspiring as the team behind them.

With a background in BA Geography with Quantitative Methods, I value sustainable urban development and the innovative solutions that can have a positive impact on people and businesses, and help them achieve their potential. The team at DG Cities aims to do exactly that by putting people first, with a collaborative approach and by exploring the capabilities of data and technology to create better cities for everyone.

My first few weeks were packed with a variety of engagements, some expected - think introductory meetings and admin - and some very exciting, like joining an NHS-led session about local connection, meeting the Royal Borough of Greenwich council leader Anthony Okereke, and even getting involved in a rebranding exercise. These experiences gave me a behind-the-scenes look at where and how we bring value to the public sector, organisations and people - and built my excitement for what is to come.

My academic background centres around GIS (Geographic Information System) mapping, data analysis and visualisation, with a focus on population health and accessibility to city services. My expertise in geocomputation was applied in a specialised quantitative dissertation, developing a framework to measure the effect of proximity to amenities on personal happiness, drawing on concepts like the 15-minute city.

I’m experienced in tools like R, QGIS and Excel, and had the opportunity to do placements at QStep and Rail Safety and Standards Board (RSSB) where I enhanced my understanding of research applications and real-world data analysis.

I’m now excited to contribute to innovative projects here at DG Cities, by using evaluation-based methods including Theory of Change (ToC) development, data analysis and report writing. I am thrilled to be part of this interdisciplinary and passionate team aiming to address urgent environmental, social and economic challenges!




Understanding how we describe trustworthy, responsible and ethical AI

Less than half (44%) of UK businesses using AI are confident in their ability to demonstrate compliance with government regulations, according to a new report released by the Department for Science, Innovation and Technology. DG Cities contributed to this research, published under the Responsible Technology Adoption Unit, which highlights the challenge of businesses and public sector organisations adopting AI tools without sufficient safeguards in place. For our latest blog, our Research & Insights Director, Ed Houghton, who led the research, explains why the words we use to define emerging tech matter.

Max Gruber / Better Images of AI / Ceci n'est pas une banane / CC-BY 4.0

In a world already full of jargon and buzzwords comes AI to generate its own. Almost overnight (although those in the field will no doubt argue otherwise) business has had to run to keep up, as new terms, such as gen-AI, have entered the lexicon. Of course, the day-to-day use of jargon might be irritating, but beneath it lies a critical challenge: within the AI space there is no clear language that people believe, understand and trust.

Nowhere in the AI field is language more important than in the space of AI assurance. Put simply, assurance is the practice of checking something does as it is designed and intended. For businesses using AI, assurance is critical in assessing and validating the way AI uses business or consumer data. In regulated industries like banking, AI assurance is becoming a key requirement of responsible practice.

At DG Cities, we were recently commissioned by DSIT to explore assurance language as part of the UK government’s push to create the UK’s AI assurance ecosystem. Our aim was to engage with UK industry to understand the barriers to using assurance language, and the importance of standardised terms to helping businesses communicate with their customers and stakeholders. We surveyed over 1,000 business leaders and interviewed 30 in greater depth to explore their views.

What we found gives an interesting picture of this emerging space. We found excitement and interest in making use of AI, but concerns over doing the right thing. For example, almost half (44%) didn't feel confident they were meeting assurance requirements from regulation. The reasons for this were numerous, but consistent themes were: lack of clear terms, and lack of UK and international standards.

We also spoke to the public sector about assuring AI when working on public services, including in local government. Here similar issues came up: a lack of knowledge of how to assure AI, and terms that were inconsistent. We believe this is a barrier to the safe adoption of AI in sectors where it could have major value.

It's great to see our work for DSIT now shared. We think this is a massive opportunity for the UK to lead globally, to create AI assurance businesses and tools that are designed to ensure AI remains safe and trustworthy, and that ensure the public is always protected when AI is used.


If you’re interested in finding out more about our work in AI, you can read about how we help local authorities navigate the challenges of ethical and effective use of new tools here, browse our reports here, or get in touch

Have you factored independent evaluation into your retrofit funding bid?

As councils and organisations get ready to apply for Wave 3 of the Warm Homes: Social Housing Decarbonisation Fund, it’s useful to examine the critical role of behaviour in both intervention success and M&E design. Our research for the Department for Energy Security & Net Zero has shown that effective retrofitting goes beyond physical upgrades. It requires understanding of the behavioural and socio-economic factors that influence residents’ engagement and satisfaction.

For the first of two blogs, our Behavioural Economist, Leanne Kelly shares her tips to improve retrofit outcomes: gathering household insights early, tailoring engagement strategies, and designing projects with co-benefits in mind. This kind of robust, behaviour-informed M&E is key to better outcomes, and to scaling retrofit efforts much more efficiently across social housing.

As an innovation company owned by a local authority, and with the trials, engagement and monitoring and evaluation (M&E) work we do in communities, we understand the place-based, practical, and behavioural elements to schemes like the Warm Homes: Social Housing Fund, which is open for Wave 3 applications.

Our Complex-to-Decarbonise (CTD) work with UCL for the Department for Energy Security & Net Zero, for example, helped surface and evidence challenges and solutions for retrofit work. It gave a holistic picture of the complex challenge. The output of the work was an identification framework that integrated the physical, locational, occupant demographic, behavioural, and system-level attributes.

The Warm Homes fund has been an important vehicle for social housing retrofit, and laying critical foundations for energy system change – it has also provided the opportunity to demonstrate success and value to the wider sector and to private housing. There are of course challenges to its implementation, which M&E should capture, to reflect back lessons and best practices – and M&E should itself be designed to overcome challenges.

Here, I want to focus on the role of behaviour – in retrofit work, and in M&E design and delivery more generally – and share some of DG Cities’ tips to improve intervention delivery and evaluation in this space.

Behaviour matters

Behavioural attitudes, intentions and changes are critical to decarbonisation at scale: in terms of how aware and informed people and organisations are, how able and motivated they are to participate in and respond to interventions, and how lived outcomes change. These outcomes often include subjective wellbeing considerations, like financial stress and place and housing satisfaction.

Behavioural attributes should be understood across a household’s whole user journey of retrofit: the design process, engagement and buy-in, work delivery, and post-work use and maintenance. These stages often require significant care, time and/or cost, whilst the decanting of residents for work and the disruption to their daily lives are critical factors in retrofit uptake and effectiveness. Therefore, understanding and shaping interventions through this user journey and project cycle can help to reduce drop-off, delay and disappointment.

Behaviour is only mentioned once in the DESNZ M&E Framework, with just two mentions each of satisfaction and attitudes, and no mention of wellbeing. Our CTD work also found there were limited datasets for considering socio-economic barriers, impacts and distributional aspects beyond household characteristics and income data, and limited evidence on social and behavioural barriers. Nevertheless, our CTD research raised the need to include social, economic and behavioural attributes, as they exacerbate the complexity and challenge of retrofitting homes. Our interviews and case studies identified many useful examples.

“Many people don’t understand what it means to them, other people understand it as a cost, other people understand it as a comfort, so it needs a very different communication tool that you need to use to understand the urgency to improve their building... to use different tools depending on the group of people that you need to work with.” (Interviewee)

Low willingness to have one’s own home retrofitted needs to be recognised as a barrier, one with both intrinsic elements (attitudes, knowledge, ability, concern about disruption) and extrinsic ones (incentives, how benefits are framed). Ability, or perceived ability, matters too. Vulnerable households, those with health-related issues or potential push-back may not be known initially, but they can be identified (other services may know these householders better), empathised with (is home safe, familiar, under their control?), and planned with (why those times or that approach may not work for your family).

Councils can spend a great deal of time and money trying to reach, engage, inform, engage again, and understand a wide range of residents on decarbonisation, and there is a risk that some of these efforts don’t keep households in the programme or provide valuable final outcomes. This has ramifications for further council decarbonisation and place-based ambitions for that neighbourhood. It also matters for understanding and delivering a just transition, without households being left behind.

Further, there may have been missed opportunities to utilise the retrofit and its engagement to meet other needs of households – opportunities to collaboratively share wider information or invite residents to local health, community or service activities/events - or to support the development of more neighbourhood connection and cohesion – a chance for people to interact positively with their neighbours.

Trying to mitigate these risks is reflected in some of our tips below for enhanced outputs and outcomes. For example, there is quite a gap between the basic M&E KPIs of ‘number of tenants engaged and signed up to works’ and ‘number of properties completed’ and the various risks described above. The tips reflect some of what we have learnt through our monitoring and evaluation work.

DG Cities’ top three tips to aid better outcomes through design and delivery:

  1. Undertake housing and household information gathering and profiles earlier on, identifying where ability or willingness for programme inclusion may be low and interaction more complex.

    The CTD identification framework can be followed to consider a range of attributes, including physical and behavioural barriers and opportunities, recognising that varying levels of challenge exist across a housing stock, rather than a simple split between challenging and non-challenging homes. A range of methods can be used here.

    As well as required in-house surveys, integrating wider service teams’ knowledge and behavioural frameworks like COM-B can be really useful (a minimal profiling sketch follows this list). Build in an understanding of resident attitudes, home behaviours and motivations to design and deliver the work, and tailor or disaggregate approaches as needed.

  2. Tailor the outreach and engagement design in response to these barriers and enablers.

    A range of routes and methods could be used, considering current communication and community channels, trusted local messengers, and collaborating with more embedded service teams.

  3. Design with co-benefits – there may be clear ways for co-benefits to be delivered via the retrofit and energy works, such as street quality, home comfort and others that matter for the specific residents.

    Creating a sense of shared neighbourhood aims and social connection and an individual sense of agency (having areas of choice, even if small, within the programme) have been found to work elsewhere. These may need to be better framed, explored with and presented to residents.

    There may also be an opportunity or need to create a more beneficial offer, raising interest and motivation – could the retrofit journey be combined with other service delivery? Could residents jointly be informed on and access retrofit and other activities? Could the group of residents be brought together earlier, developing a sense of connection and familiarity before the improvement work?

    Here, we have been exploring the concept of local activity matching in neighbourhoods as an efficient delivery model.
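As a minimal sketch of the household profiling mentioned in tip 1, here is what an early profile built around COM-B (Capability, Opportunity, Motivation) dimensions might look like; the fields, scales and threshold are illustrative assumptions rather than part of the DESNZ framework or the CTD identification framework:

```python
from dataclasses import dataclass

@dataclass
class HouseholdProfile:
    """Illustrative early-stage profile combining stock and COM-B attributes."""
    address: str
    archetype: str      # physical/locational attributes from stock data
    capability: int     # 1-5: knowledge and ability to engage with the retrofit
    opportunity: int    # 1-5: practical scope to take part (e.g. ability to decant)
    motivation: int     # 1-5: willingness and perceived benefit

    def needs_tailored_engagement(self, threshold: int = 3) -> bool:
        # Flag households where any COM-B dimension is low, so outreach
        # (tip 2 above) can be tailored rather than one-size-fits-all.
        return min(self.capability, self.opportunity, self.motivation) < threshold

households = [
    HouseholdProfile("1 Example Row", "1960s low-rise flat", 4, 2, 3),
    HouseholdProfile("2 Example Row", "Victorian terrace", 5, 4, 5),
]
for h in households:
    route = "tailored engagement" if h.needs_tailored_engagement() else "standard route"
    print(f"{h.address}: {route}")
```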

Of course, such approaches themselves need to be tested. M&E has a critical role in enabling design and delivery teams to learn what works. There is an important role for pilots here – trying, for example, the profiling, tailoring and co-benefits designs above in relation to a wider cohort to assess if they worked better – and, if so, where. Doing so now, and continuing to learn with the monitoring of any different approaches and innovations, can help councils take forward the future scale of retrofit and heating works more efficiently. This is something our DG Cities team love to help with.


Stay tuned for part two of Leanne’s blog, which looks at how to design an M&E approach in this context, with some useful tips. You can learn more about our evaluation practice, our experts and read our introductory whitepapers here, or get in touch to discuss how this strand of our work can support your decarbonisation and funding aims.

How can councils meet their housing decarbonisation aims?

Every country is working to mitigate the impacts of climate change. While the global direction is guided by COP summits and diplomacy, and national policy might set the budget and priorities, it is down to local government to deliver on targets, whether that’s upgrading housing stock or rolling out EV charging infrastructure. Great work is being done at this local level, but councils face significant barriers to working at pace and scale to realise some of their ambitions. Here, we’re taking a look at some of the main issues, and possible strategies to address them, drawing insights from some of our research and practical experience supporting local authorities in their decarbonisation efforts.

Can we afford it?

The first and most obvious barrier to any initiative is cost. With enormous pressure on budgets, local authorities can face impossible decisions between cuts or investment in different services. In many cases, the capacity to invest in new infrastructure simply isn’t there, despite the recognition that in the long-term, financing renewable tech will deliver benefits. Another issue in terms of financing is the absence of a structured approach to funding at the right scale to tackle decarbonisation.

A 2021 report by the National Audit Office highlighted the extent to which funding shortages were identified as a barrier to achieving carbon reduction targets: “17 local authority areas received £20 million or more each, while 37 received less than £2 million each.” While the situation has evolved since, it is still the case that some councils have been more successful in securing grant funding than others. Much of the funding is allocated through competition, which naturally favours councils with existing resources. Matched funding is required, delivery timescales are linked to government budget timelines rather than what is actually feasible on the ground, and the process tends to reward caution rather than ambition and innovation.

Do we have the expertise?

Decarbonisation initiatives can require specialist knowledge and expertise, which may be limited within some councils. Within existing teams, with responsibilities and budgets stretched, the lack of available capacity to plan, implement, and monitor decarbonisation projects can also hinder progress.

“In some areas, officers might have to be placed in jobs that don’t match their expertise because that’s where the funding is now allocated – there’s often a skills challenge that councils have to address, whether through hiring, training or reallocation of resources.”

- Balazs Csuvar, Director of Innovation & Net Zero, DG Cities

How can we predict and invest on the basis of future policy?

Shifting regulatory frameworks and national policies can create uncertainty for councils, making it difficult to develop coherent decarbonisation strategies. Ambiguity surrounding government incentives and targets may have deterred councils from committing to long-term sustainability goals pre-election. Following July’s result, as the new government establishes itself, it’s natural to be cautious about investing in areas where there may be significant policy change.

What if there is resistance locally?

Decarbonisation initiatives can face resistance from various stakeholders, not least residents, businesses, and local interest groups. Concerns about cost implications, disruption and changes to the appearance of a place, as well as any perceived inconveniences, may hinder community support for sustainability measures. Some policies can be particularly divisive, such as low traffic neighbourhoods (LTNs) and restrictions around parking.


How do we start to break down some of these barriers to meeting national and local targets?

First, by tackling the financial argument and helping councils identify funding, meet the criteria and apply for it.

There are grants, private partnerships, and sustainable finance mechanisms to support decarbonisation efforts. Underpinning all these investment models is the principle that prioritising low-carbon infrastructure and energy-efficient technologies can yield long-term cost savings and environmental benefits. Our data-led approach can often help councils evidence this.

The government’s Warm Homes: Social Housing Fund is an important source of financing for housing retrofit. The third and latest wave opened for applications at the end of September and will close on 25 November. A key step in submitting a bid is identifying priorities, and DG Cities has developed a tool to support this – find out more about our ‘home-by-home’ plan here.

Second, by upskilling and building capacity within councils.

There’s clearly a need for investment in training and knowledge-sharing initiatives to build internal capacity for decarbonisation, potentially in collaboration with academic institutions, industry experts, and peer councils. This is a long-term priority. However, we understand the realities of council budgets and know that this isn’t always feasible – that’s why we exist. DG Cities was set up as an independent company by the Royal Borough of Greenwich to advance innovation in the area, but also to act as a strategic innovation partner, so that other councils can benefit from this expertise and from Greenwich’s experience as a testbed.

Third, we need continuity and stability in policy-making from government.

The public sector needs supportive regulatory frameworks that incentivise decarbonisation. Proactive participation in policy consultations and lobbying efforts can influence national decision-making processes and ensure alignment with local priorities. Projects must be coordinated beyond a local level – as our government-funded work to support the rollout of electric vehicle charging in rural areas showed, there is no use putting infrastructure where there is no grid capacity to support it. This sentiment was echoed in the LGA’s report, Green heat: Achieving heat and buildings decarbonisation by 2050, which highlighted the gap between national policy and local delivery for heating, as currently, there is “no mechanism and limited ability for councils to influence or shape investments in developing the electricity grid infrastructure in line with local plans for decarbonising heat.”

Finally, bring the public into the process.

This is vital. Effective communication and engagement are essential in building support for even the most contentious decarbonisation initiatives – ideally, turning apprehension into advocacy. Councils need to ensure transparent and inclusive approaches, involving residents and businesses in decision-making and addressing any concerns through meaningful dialogue and education. Our work in public engagement around new technologies has demonstrated the value of engagement in building trust and shifting attitudes – and ‘meaningful’ is the key word here, as the process must be open, inclusive and impactful, and not guided by pre-determined outcomes.

If the UK is to achieve its decarbonisation targets, national and local government must work together and in partnership with communities. Where internal capacity and skills are an issue, councils should look to bring in staff with relevant expertise and knowledge, or selectively look to external consultancies for support. We say selectively, as the aim should be to create in-house expertise and build capacity. By identifying and addressing barriers such as financial constraints, lack of capacity and expertise, regulatory uncertainty, and stakeholder resistance, local councils can drive the transition towards a more sustainable, equitable and resilient future.


Read more about our home-by-home plan and some of our work delivering council electrification strategies - and get in touch, we’d be happy to discuss our experience working with local authorities on strategies to meet decarbonisation targets across housing and transport.

Increasing cycling rates: from the Dutch seaside to Stevenage, the value of examining what works in practice

Our Head of Communications, Sarah Simpkin, spent her holidays enjoying the impressive cycling infrastructure of the Netherlands. For our latest blog, she takes the opportunity to talk about it endlessly… sorry, to take a look at how Dutch best practice is being shared with other cities, the value of working with specific groups to encourage uptake, evaluating what works (or not) and why, and some great feedback from DG Cities’ recent work in Stevenage.

Knooppunt ‘point to point’ cycle route sign in Haarlem, the Netherlands

One of the first things I did when I got back from our summer holiday was read the Dutch Cycling Embassy’s best practice guidelines. Not a sentence I might have expected to write.

We had been cycling around the Netherlands. Everyone knows the country is a leader when it comes to cycling’s modal share (in some areas, more than half of all journeys are made by bike). Still, we were so taken by the comfort of universal cycle lanes, the network of signposted ‘fietsknooppunten’ (point-to-point number sequences that our 9-year-old was able to remember and direct us between), the easy connections between towns and cities that didn’t involve the mortal danger of joining an A-road or sharing narrow country lanes with traffic at the national speed limit; then the underground bike parks, drop kerbs, and oh my, those magical Dutch roundabouts... Equally wonderful was seeing the range of people (and pets) using them, from weekend racers to cargo carriers, and the freedom designing for wheels also gave mobility scooter users and parents with pushchairs.

The Dutch Cycling Embassy’s best practice guide is a great example of a document of its type: clear, engaging and well sourced, with so many potential lessons for spending the money the UK government has promised to invest in safe infrastructure for active travel. It demonstrates the usefulness of knowledge-sharing initiatives, like Sharing Cities, which London and Greenwich, with DG Cities, were part of – an EU-funded platform for international collaboration to help commercialise, advance and deliver new smart city solutions. A chance to learn from Milan’s expertise in retrofit, for instance, or the rollout of e-bikes on the hills of Lisbon.

Cycling in London

It would be easy to come back from the Netherlands to anywhere feeling as deflated as a flat tyre on a touring bike, but it’s important to remember that the Dutch are decades ahead in terms of policy and investment. While London hasn’t gone fully Dutch quite yet, things have improved significantly since I started tentatively commuting along Brixton’s bus lanes more than twenty years ago.

For a start, there are many more people out on bikes – 24% of Londoners say they have ridden a bike in the past year – supported by positive changes implemented by some councils, as well as major improvements driven by walking and cycling commissioner Will Norman and his team. We have more segregated routes and more choice, and there are many more cyclists – all wonderful to see – but we’re still far from normalising the bike as a viable alternative for everyone, and clearly that has as much to do with culture and perception as with infrastructure.

It's also about having access to a bike. It’s a catch-22 that many people who might be open to taking more journeys by bike won’t do so without safe infrastructure, but safe infrastructure won’t always be prioritised unless more people demand it. How might policy induce demand? In London, this kind of stimulation is helped by hire cycles and e-bike services – Lime’s own survey claims usage has increased by more than 10% per month, so demand is growing. Still, hiring any bike can be expensive for longer journeys. When it comes to ownership, there are Cycle to Work schemes and great initiatives aimed at specific groups, such as bike donation schemes for refugees, but I’m interested to see how else we can address affordability – to give people the means as well as the routes.

Understanding what works

Alongside innovation in urban design, policy and public engagement, there’s a need to commit to diagnosis and evaluation; to look at what works in terms of space, facilities and behaviours, and be prepared to adjust for change where necessary. Earlier this year, DG Cities was commissioned by Hertfordshire County Council and Stevenage Borough Council to do just that with a project looking at cycling uptake in the town.

Ed, Emily and Leanne of DG Cities in Stevenage

Stevenage, as a new town, was built to accommodate cycling, and the council has invested in paths and infrastructure, but rates remain stubbornly low, particularly among some demographics. This attention to increasing uptake among particular groups was one of the most interesting and useful aspects of the project, which focused on families with children under 18, staff at the Lister Hospital, and students attending North Hertfordshire College.

This kind of targeted strategy has benefits when it comes to supporting any behaviour change initiative, as Ed, our Director of Insights, who led the project, explains:

“The right way to do behaviour change is to focus on cohorts, rather than any generic approach. This makes it much more effective. Here, we were able to look at behaviours within the underrepresented groups where there is an opportunity to make a difference.”

Following an intensive phase of evidence analysis and discussion, the team spent time out and about in Stevenage, talking to all kinds of road users, hospital staff, students and residents, to understand the unique circumstances of the place and people’s barriers to cycling. From this came a co-design process to explore potential solutions and put together a practical intervention plan, which included some great ideas to trial that respond directly to people’s concerns: bicycle libraries for people to hire rather than buy, an inventive ‘cycle miles’ scheme, refresher opportunities for those at different life stages, and measures to support and grow local networks and create events centred around cycling. These ideas came through a more iterative process of continual improvement, and really highlighted the importance of evaluation in any programme with behaviour change as its aim.

And it’s not just us saying that. The DG Cities team was delighted to receive some very positive feedback from our client, who commented on how Ed, Leanne and Emily “demonstrated a thorough understanding of our objectives and tailored their approach to meet our specific needs. They employed rigorous behavioural science methodologies and delivered comprehensive insights that have been invaluable to our understanding of barriers and facilitators to cycling behaviours in the target area and target demographics.” They also found the final report clear, useful and told us it “provided actionable recommendations that we are confident will drive positive change to our cycling offer in Hertfordshire.” 

Part of the ease and enjoyment of cycling in the Netherlands is the lack of friction between different road users. To work towards this, it’s vital to fully understand people’s concerns and work collaboratively to find solutions – not everyone can or would want to hop on a bike, and not all vulnerabilities are obvious. So it’s great to see a project like this give the DG Cities team the chance to examine the data, talk to people and look at what really works, why and for whom – and bring fresh ideas to the challenge of increasing cycling and walking rates in different areas.

 

To find out more about our evaluation service, read our introduction to assessing impact or get in touch!