Technological innovation with human values

How do we ensure that innovations in transport, for example, or public services are not only easy to use but also meet real human needs? Can they reflect fundamental societal principles like safety, fairness, and community? Following up on some great discussions at Tech Week in London last week, our Behavioural Scientist, Emily King, explores the science behind value-led development at both a local and a global scale, and how an understanding of these drivers can ensure that innovations like self-driving cars are responsibly designed and deployed.

Ethical Roads workshop at SMLL

One of the last bills to make it through Parliament before the election was the UK’s Automated Vehicles Bill – a world-first piece of legislation designed to ensure AI innovations on our roads are safe and deployed responsibly by industry. The AV Act has established the legal framework, but legal foundations alone are not enough for self-driving to be accepted. New AI-based technologies need sound ethical foundations too.

At DG Cities, we spend a lot of time thinking about how to develop technologies that work for individuals and communities. We use principles and approaches from the fields of human-centred design and behavioural science to understand how to develop and deploy technologies that meet real human needs.

In this respect, self-driving is an interesting area of innovation, as it is an industry where putting people first is a genuine challenge. Our work often centres on the concepts of trust in, and acceptance of, technology in its different forms. Our ongoing DeepSafe work with a commercial and academic consortium in the self-driving industry, for example, seeks to better understand the factors driving acceptance of self-driving vehicles, and what it takes to build trust in them.

The technology acceptance model highlights two important factors that drive acceptance, usefulness and ease of use (a simple scoring sketch follows the list):

  • Is it useful? Does the technology help to meet specific needs?

  • Is it easy to use?  
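
To make these two factors concrete, here is a minimal sketch of how acceptance survey responses are often scored in practice. The item wordings, the 1–7 scale and the simple averaging below are illustrative assumptions, not the instrument used in DeepSafe or a validated technology acceptance questionnaire.

```python
# Minimal sketch of scoring technology-acceptance survey data.
# Item wordings, the 1-7 scale and plain averaging are illustrative
# assumptions, not a validated instrument.

from statistics import mean

# Hypothetical 1-7 Likert responses from one participant
responses = {
    "usefulness": [6, 5, 7],   # e.g. "A self-driving shuttle would help me get around"
    "ease_of_use": [4, 5, 4],  # e.g. "I would find the booking app easy to use"
}

def construct_scores(responses):
    """Average the items for each factor into a single score."""
    return {factor: mean(items) for factor, items in responses.items()}

print(construct_scores(responses))
# e.g. {'usefulness': 6, 'ease_of_use': 4.33...}
```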

Human-centred design focuses largely on the second of these factors – how easy or attractive a technology is to use – by developing technologies that take the user experience as their starting point. However, what seems to be less at the heart of discussions around human-centred design is actual usefulness – how far innovations meet real human needs, and particularly how they align with broader societal values.

How do we start to bring values into the design of self-driving services?

One way to make acceptance of technological innovations more seamless would be for those working in technological innovation to root the design process in societal values. The human-centred design process begins with empathy for the potential user of a product – this should include an empathetic understanding of what users value most.

But first, how do we define values? Essentially, they are our internal standards of what is important. Our values inform our attitudes, beliefs and behaviours. Whilst individuals hold different values, cross-cultural analysis[1] suggests that some types of values are consistent across most individuals and societies.

According to this research, the most strongly held values worldwide include:

  • Benevolence: ‘preserving and enhancing the welfare of those with whom one is in frequent personal contact’

  • Universalism: ‘understanding, appreciation, tolerance, and protection for the welfare of all people and for nature’

  • Self-direction: ‘independent thought and action – choosing, creating, exploring’.

Technological innovations may align with some widely held values more than others. For example, self-driving vehicles promise to meet societal needs such as improved safety on the roads and easier travel through reduced congestion. These benefits largely come from the greater connectedness of vehicles, which provides additional information to support safer driving decisions.

However, the autonomous element of the vehicles also threatens ‘human welfare’, for example by reducing the job security of bus and taxi drivers, or by reducing connectedness and community through removing the opportunity for human interaction during a taxi journey. In this sense, the innovation is not fully aligned with the core values of benevolence and universalism.

Our Ethical Roads project, delivered in collaboration with Reed Mobility, identified several ‘ethical red lines’ for self-driving vehicles, which align with the values of benevolence and universalism, such as ensuring that vehicles improve road safety and that all road users are protected equally. This highlights how values underpin requirements for technologies to be accepted.

For technological innovations to be truly human-centred, it is crucial to develop a coherent sense of which values are most important to communities, and use these as a basis for innovation, to ensure that technologies reflect the true needs and values of society.

What could this look like in practice?

At DG Cities, we look at technological innovation at a range of different scales, from very local issues facing a particular community (e.g. the best method for using sensors to reduce damp and mould on specific estates) through to issues at a national or global scale (e.g. AI assurance).  

On a local community scale, values-centred design could involve identifying the specific priority needs and values a community holds before embarking on a project or introducing a new technological innovation. Research into attitudes and priorities is important here – what matters most to people, and what innovations could genuinely improve their lives?

Innovation should also be based on the values of a specific community. Measures such as the Schwartz Value Survey or the Portrait Values Questionnaire could be built into research instruments to identify which values are of greatest importance to individuals and communities, and technological innovations should be aligned with these.
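
As a rough illustration of how such measures work, here is a minimal sketch of turning Portrait Values Questionnaire (PVQ) style responses into Schwartz value scores. The items, ratings and item-to-value mapping below are hypothetical stand-ins – real PVQ instruments use fixed, validated item sets – while the centring step reflects Schwartz’s recommended correction for individual differences in scale use.

```python
# Illustrative sketch of scoring PVQ-style responses into Schwartz value
# scores. The items and the item-to-value mapping are hypothetical
# stand-ins for a validated instrument.

from statistics import mean

# Hypothetical 1-6 ratings ("How much is this person like you?")
ratings = {"q1": 5, "q2": 6, "q3": 3, "q4": 4, "q5": 2, "q6": 5}

# Hypothetical assignment of items to three of Schwartz's value types
value_items = {
    "benevolence": ["q1", "q2"],
    "universalism": ["q3", "q4"],
    "self_direction": ["q5", "q6"],
}

def schwartz_scores(ratings, value_items):
    """Mean rating per value, centred on the respondent's overall mean.

    Centring on each person's mean rating corrects for individual
    differences in how people use the scale, as Schwartz recommends.
    """
    overall = mean(ratings.values())
    return {
        value: mean(ratings[q] for q in items) - overall
        for value, items in value_items.items()
    }

print(schwartz_scores(ratings, value_items))
# Positive scores indicate values the respondent rates above their own average.
```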

Starting a project with a problem or goal which has been identified or defined by communities helps to bring a sense of ownership to new innovations, and involves communities throughout the whole process, rather than seeking feedback on a pre-determined idea.

At a global level, technological innovation that is truly human-centred should be aligned with the values of the global majority. This means that innovations in AI should reflect not only the values of demographics like tech bros or white wealthy westerners, but those of people around the world. In Schwartz’s terms, this means ensuring innovations improve, or at the very least do not reduce, the overall welfare of the global population and nature, and that they enhance rather than undermine independent thought and creativity.

It is important for innovation to begin with research into people, communities and their values. For innovations in AI, which have global reach and impact, behavioural and design research is needed to ensure they reflect the priorities of the wider world.

Meanwhile, local organisations should focus on establishing the values and priorities of local communities as a way of identifying where to innovate. Methodologies such as citizens’ assemblies or deliberative dialogue research, which ask communities across the globe to design their ideal futures, could be vital in taking the next step toward technological design centred on human values.

If you’d like to learn more about our behavioural innovation approach, you can read more here – or get in touch!



[1] Schwartz, S. H. (2012). An Overview of the Schwartz Theory of Basic Values. Online Readings in Psychology and Culture, 2(1).