Ensuring technology benefits humanity

Introducing the University's new Institute for Technology and Humanity

A major interdisciplinary initiative aims to meet the challenges and opportunities of new technologies as they emerge, today and far into the future.

The frenetic pace of technology in the 21st century is a double-edged sword.

Emerging technologies could increase our prosperity and longevity, and help eradicate disease. However, they also hold the potential for conflict, democratic breakdown and climate crises.

This is the “lesson of history”, according to the director of a new Cambridge institute set up to help ensure new technologies are harnessed for the good of humankind.

“Previous waves of technology helped us thrive as a species, with higher incomes and more people alive than ever before,” says Dr Stephen Cave. “But those waves also had huge costs.”

“The last industrial revolution, for example, fuelled the rise of communism and fascism, colonial expansion and the greenhouse gases that now threaten the biosphere.”

Today, both the scale and speed of technological change are greater than ever before.

Large language models have had among the fastest uptake of any technology in history: 100 million active users of ChatGPT within sixty days of launching. Just months after the Covid outbreak, scientists were testing mRNA vaccines that now protect over five billion people.

The potential rewards of these technologies are immense, while the worst-case scenarios, should we fail to manage them, could be existential.

Maximising the benefits while minimising the risks requires insights into history, society, politics, psychology and ethics as well as a deep understanding of the technologies themselves.

To meet this interdisciplinary challenge, the University has brought together three established Cambridge research centres under one banner: the new Institute for Technology and Humanity, launched today.

By integrating the Leverhulme Centre for the Future of Intelligence (CFI), the Centre for Human-Inspired AI (CHIA), and the Centre for the Study of Existential Risk (CSER), the new initiative will bring together historians and philosophers as well as computer scientists and robotics experts.

“The new institute demonstrates that the University of Cambridge is rising to this challenge of ensuring that human technologies do not exceed and overwhelm human capacities and human needs,” said Prof Deborah Prentice, Cambridge's Vice-Chancellor.

Between them, the three Cambridge centres have already launched the UK's first Master's degree in the ethics of AI, and their researchers have advised international organisations on the governance of nuclear and autonomous weapons.

Current work includes design toolkits for ethical AI, computer vision systems that could help self-driving cars spot hidden pedestrians, and research on the effect of volcanoes on global communications systems.

The new Institute will see major research strands on lessons from Covid-19, the misuses of generative AI, and the development of emotion-enhanced AI.

There are also plans for scholarship initiatives and further Master's and PhD programmes on the ethics of technology as well as human-centred robotics.

Situated in the School of Arts and Humanities, the Institute is closely tied to other flagship programmes, such as the University-wide ai@cam initiative, which aims to support the development of AI for science and society.

The University has long been at the forefront of technological development, from IVF to the webcam – and also responses to it, from Bertrand Russell’s work on nuclear disarmament to Onora O’Neill’s contributions to bioethics.

This push and pull between the engineering of new technology and the ethics behind it is “how the future gets forged”, says Cave. “We now have all this under one umbrella.”

From ethics to algorithms: Meet the new Institute's leadership team

Professor Anna Korhonen, Director of the Centre for Human-Inspired AI

“Too much artificial intelligence isn’t built with human users in mind. Why should we be adapting to the needs of AI, when AI should be adapting to our needs? That’s the way it should be, right?”

This is the crux of the work Professor Anna Korhonen and her colleagues conduct at the Centre for Human-Inspired AI (CHIA) – one of the three pillars of the new Institute. While the other wings focus on the societal implications of AI and other technologies, CHIA is dedicated to the technical development of AI itself.

“The new institute will bring together those developing AI and those investigating its implications for humanity. We need people trained in the humanities and social, cognitive, and other human-centred subjects informing the development of AI, which has primarily been the domain of STEM researchers.”

CHIA innovates across a range of AI fields, including machine learning, natural language processing, computer vision, human-computer interaction and robotics. All of this work, however, puts humanity at the heart of the machine.

Korhonen points to work such as that of Aleesha Hamid, a CHIA PhD student developing AI for people with motor disabilities. Hamid’s cousin was brutally attacked five years ago, leaving her with a traumatic brain injury and severe physical disabilities.

Now Hamid is working on algorithms that refine “eye typing”, so that people with neurological disorders such as cerebral palsy, or who have been paralysed by injury – as with Hamid’s cousin – can communicate more freely through gaze-tracking technology.

Other researchers at the Centre are developing AI to assist with news verification, medical diagnosis or the creation of new forms of art. There’s also work on improving the safety of AI, and expanding its reach so that more of the world can gain from using it.

“Currently, technologies such as ChatGPT mainly exist for major languages such as English and Chinese, leaving most of the global population behind,” said Korhonen. “We are working on how AI can be scaled up to deal with at least the next one thousand languages.”

For Korhonen, these are all examples of how AI should enhance the human experience, rather than replace or negate it.

“We would like to see humans put at the centre of every stage of AI development – basic research, application, commercialisation and policymaking – to help ensure AI benefits everyone.”


One of the outputs from CHIA in its new home within the Institute will be Master’s and PhD programmes aimed at giving the next generation of AI scientists a holistic education: access not just to technical expertise, but also to the human, ethical and industrial aspects of AI.

“By working within the new institute, we are able to combine our understanding of the algorithms with the expertise in ethics from the other centres,” says Korhonen.

“If you want AI to be human-inspired and to benefit society, this kind of well-rounded education is essential, whether our people stay in research or go on into industry or other sectors of society where this technology will play a prominent role in the future.”

Dr Stephen Cave, Director of the Institute for Technology and Humanity

Dr Stephen Cave thinks a lot about “desirable futures”, even though he definitely does not want to live past his own sell-by date. A philosopher by training, he is Director of the new Institute for Technology and Humanity, and believes we risk complacency when it comes to advances in technology.

“It feels as if we’ve got used to the breakneck speed of current technological change, and yet there are many things we find it hard to imagine changing, such as social and political systems, and – in somewhere like the UK – relative peace and prosperity,” says Cave.

“But the lesson of history is that technological transformation can very easily provoke huge instability, from revolution to civil war. To avoid this, we need to take such possibilities seriously.

“Across the new institute we have historians, AI engineers and experts in social structures, but philosophy can bring these disciplines together to facilitate the conversations that help to steer us in more favourable and ethical directions. Interdisciplinary thinking is baked into every project we do.”

Among Cave’s many preoccupations around humanity’s uncertain future are the ramifications of longer lifespans.

“Major investments are now going into life extension technology from some of the world’s wealthiest people. The consequences for society of extra decades of life would be immense: systems of education and career structures could collapse, not to mention the planetary toll.”

This month, Cave publishes a book taking the form of a debate with US philosopher John Martin Fischer on the question of eternal life. Cave is firmly against immortality, not only because of the risk of societal breakdown, but also the prospect of endless ennui.

More immediate concerns loom large at the Institute, however, with a number of its researchers feeding into policy discussions around threats posed by artificial intelligence and biological warfare at both UK and European levels.

“There’s a huge amount of justifiable excitement about the new wave of generative AI, and what it can do, but we need to be thinking in advance about worst-case scenarios in order to avoid them,” says Cave. “Initiatives such as the UK’s AI Summit and the EU’s AI Act are fantastic, but require a proper research base behind them.

“Regulation is vital, but we need to understand the evolution of AI's capacities – and its trajectories – to have a sense of how it might operate five or ten years from now. Otherwise we risk regulating redundant technology. This is where an institute such as ours comes into its own.”   

Disruptive new technologies are always double-edged swords, argues Cave, and there is nothing inevitable about either triumph or calamity. “Nuclear energy is a perfect example of a technology that could prove essential to solving the climate crisis. And at the same time, of course, it threatens to obliterate us.”

“Or the very recent example of social media, which changed the face of global connectivity, but has contributed to disinformation and even genocide.” 

“This new Institute has been set up in order to do the kind of research that will shepherd the coming technological revolutions away from disaster.”


Professor Matthew Connelly, Director of the Centre for the Study of Existential Risk

Professor Matthew Connelly has been fascinated by the end of the world since he was a child. “I’ve found a way to explore my interest in doomsday thinking in every project I’ve worked on, from the history of population control to the Algerian war of independence,” said Connelly, a historian who arrived at CSER from Columbia University in the summer.

So when he took up the directorship of the Centre for the Study of Existential Risk (CSER), where “everyone is worrying about something that could end us all”, he felt right at home. “These are my people!”

Connelly’s most recent book, on declassified government secrets, was published in February this year and in part explored the covert nature of the nuclear arms race. The potential for nuclear war continues to worry him, as it did when he was a boy.

“I think we are closer to nuclear war now than at any time since the end of the Cold War,” said Connelly. “You don’t need full thermonuclear war to cause catastrophe: even a relatively small-scale exchange between India and Pakistan could create vast soot clouds that cool temperatures and cause global famine.”

Connelly argues that secrecy is a barrier to planning for existential scenarios.

“Secrecy itself is a risk. We need to learn lessons from the early days of nuclear weapon development when it comes to AI, for example, but so much of that history is still classified.”


A recent conversation he had with a senior figure at OpenAI – the company behind ChatGPT – suggests that key players in artificial intelligence share his concerns.

In fact, one potential project for Connelly involves using machine learning to build and explore a database of declassified documents to test the “mosaic theory”: whether millions of tiny pieces can actually give us the bigger picture. 

However, his next major project is – fittingly – a new history of the end of the world, a book idea he’s had in the background for almost a decade now, since his time directing a Hertog Foundation research program on planetary threats.

“The book will be presented via the four horsemen of the apocalypse – war, pestilence, famine and death – only looking at nuclear war, pandemics, climate change, and lastly apocalyptic movements, or the people who want to bring on the end of days because they think the next world is preferable to this one.”

It is this book, and the thinking behind it, that played a role in bringing him to Cambridge, and to CSER, where he is now surrounded by researchers immersed in “X-risk”, covering everything from AI to engineered pandemics, climate change to super-volcanoes.

When in need of inspiration, Connelly takes to the streets, and is working his way around CSER’s new tour of Cambridge’s own history of X-risk landmarks, including former plague pits and nuclear bunkers. “Just walking around Cambridge means I keep running into the end of the world.”

Dr Dorian Peters, Associate Director of the Leverhulme Centre for the Future of Intelligence

Dr Dorian Peters’ interest in the interaction between design and technology goes all the way back to a childhood Lite-Brite – the toy light-box with multi-coloured pegs. “I was a practising designer for a long time, so I’m always looking to turn amazing academic research into ways of actually changing how we do things,” she says.

At the Leverhulme Centre for the Future of Intelligence (CFI), Peters absorbs insights from the philosophers and social scientists that surround her, and works on translating these ideas into applications for industry.

As well as being an Associate Director at CFI, Peters spends two days a week at the Centre’s “spoke” institution, Imperial College London, where she helps embed CFI concepts into engineering practice.

“I work a lot on areas such as wellbeing, and how we can incorporate psychological insights into design that move beyond cognition and perception, and help to benefit mental health,” says Peters.

With the coming wave of generative AI, wellbeing design needs to be in on the ground floor in a way it perhaps wasn’t with social media, argues Peters. “Incentives in social media are so strongly tied up with the market, it creates a lot of tension. Consumers will stop wanting to use products that don’t respect their psychological needs.”

At CFI she works with academics researching everything from justice and discrimination to theories of mind, all of which is inseparable from psychological experience. “These are all aspects critical to wellbeing, and can help us to create systems that promote autonomy and connection.”

Philosophical notions of trust and autonomy are at the heart of Peters’ design work. “We aim to support human autonomy in the midst of rapid technological change. One simple example of a way to do this is by providing meaningful rationales when asking for things.”

“There’s much concern around consent and privacy in AI, but if you’re going to ask somebody to provide their information, then offer them a really good reason, so they feel autonomous – not tricked or manipulated. This absolutely contributes to wellbeing. It’s an obvious point, perhaps, but one that is often lost.”

Peters also focuses on ideas of language and tone. “During the pandemic we saw controlling, top-down messaging from some leaders, while others approached it more as though we were all part of a supportive community. That’s a motivational approach that better supports autonomy, and one we want to enhance when developing technology.”

The CFI Master’s course in AI Ethics and Society is now going into its fourth year, and Peters has been thrilled by the uptake. “It’s an incredibly exciting mix. We have high-level representatives from most of the major tech companies, as well as lawyers, artists, journalists, and people from big city councils and the NHS.” The Centre has also just launched an MPhil in Ethics of AI, Data and Algorithms, and is keen to establish a PhD programme in the near future.

“This is the most genuinely interdisciplinary place I've ever worked. I've never had colleagues that are so automatically respectful of differing methods and epistemologies. Academia can be riddled with tribalism at times, but we just don’t have that,” says Peters.

“Now the Institute is launching, we can integrate all the centres even more, and almost model a kind of productive pluralism. Because there just isn't enough of it on display in our world right now.”


Published 21 November 2023

Photography: Lloyd Mann and Nick Saffell

The text in this work is licensed under a Creative Commons Attribution 4.0 International License