What makes a self-driving vehicle feel safe? Understanding the nuances of attitudes to AI on the road

In a world where self-driving vehicles share the roads with vehicles driven by people, how do we define what is ‘safe’? And is it likely to be so different from the way we currently travel on the UK’s roads, one of the riskiest activities many of us do on a daily basis? DG Cities has been investigating the meaning of safety in the AI-driven future as part of project D-RISK. Our Head of Research, Ed Houghton, shares some of our latest findings and analysis.

Image: rear view of a man in glasses driving a car, with motorway traffic ahead. Photo: Dan Gold/Unsplash

Long a staple of science fiction, self-driving technology has come a long way in a relatively short period. Many cars on the road today already include some level of automation, such as automated braking systems and the forthcoming automated lane keeping systems (ALKS), both of which draw on self-driving technology. But how can we know if these technologies are safe? And is it useful to compare their statistics with those of human-driven vehicles? After all, not everyone on the road is safe.

This question of what constitutes ‘safe’ is important. To advance safe self-driving vehicle technology, we need a deeper understanding of how we define, measure and experience safety on the roads. How we perceive safety differs significantly from person to person. For example, an elderly person at a busy pelican crossing may feel unsafe, whilst a young adult on an e-scooter may feel safer than they really are. Drivers also experience safety differently from their passengers, so it’s important we consider the individual nuances of what safety can mean.

D-RISK is a recently completed £3m programme led by drisk.ai alongside DG Cities, Claytex and Imperial College London. As a collective, we have been working towards building a driving test for self-driving technologies, which we believe is a vital piece of the puzzle in building safer urban environments. But to do this, we needed to go back to basics and redefine what we mean by safe self-driving vehicles – and we did this by going directly to the public.

We surveyed 651 members of the public, and ran six workshops across the UK to explore public attitudes to autonomous vehicle safety.

One major factor that influences our perception of safety is the environment we’re in, especially if it is unfamiliar. We asked survey respondents how willing they would be to ride in a self-driving car in an urban environment compared to a rural one, at different times of day. Fewer than a fifth believe travelling in a self-driving vehicle at night would be safe, whether in an urban (17.6%) or a rural (15.5%) environment. Daytime travel was rated slightly safer (urban: 24.7%; rural: 22.1%).

We also investigated views on new partial autonomy systems that take over specific tasks from the driver. Our data showed that ALKS (Automated Lane Keeping Systems) are viewed with some scepticism by the public: only a quarter (25.2%) of respondents would look to use them in the future, and almost three fifths (59.3%) would not use ALKS technologies if they were made available to them. Just under half (48.7%) do not believe that ALKS will improve road safety, whilst almost a quarter (24.6%) are yet to be convinced.

As for what builds trust, we looked into assurance processes, such as annual software MOTs and independent software audits. Both were viewed positively: there was broad agreement that the assurance processes outlined to participants would improve trust. The highest-rated measures were annual software MOTs (49.8% believed these would have a positive impact) and independent software audits (48.4%), illustrating how much assurance processes matter to the public.

Where to next for safe autonomy?

We found great interest in autonomy as a route to safer roads, but many of those we spoke to still felt there was not enough information, or enough examples, available to overcome their concerns. Providing this, we believe, is a vital step for those looking to deploy self-driving services – and self-driving tests, like the ones developed through D-RISK, have the potential to radically shift how people view, and trust, AI.

Read our latest report in full.