AI tool helps people with opposing views find common ground

A large language model can help groups to reach a consensus by producing statements that are clearer and fairer than those written by humans.

A large group of work colleagues sit in a circle, holding a strategy discussion.
In some cases, AI does a better job than a human mediator at summarizing the collective opinion of a group. Credit: Rawpixel Ltd/Getty

A chatbot-like tool powered by artificial intelligence (AI) can help people with differing views to find areas of agreement, an experiment with online discussion groups has shown.

The model, developed by Google DeepMind in London, was able to synthesize diverging opinions and produce summaries of each group’s position that took different perspectives into account. Participants preferred the AI-generated statements to ones written by human mediators, suggesting that such tools could be used to support complex deliberations. The study was published in Science on 17 October.

“You can see it as a sort of proof of concept that you can use AI, and, specifically, large language models, to fulfil part of the function that is fulfilled by current citizens’ assemblies and deliberative polls,” says Christopher Summerfield, a co-author of the study and research director at the UK AI Safety Institute. “People need to find common ground because collective action requires agreement.”


Compromise machine

Democratic initiatives such as citizens’ assemblies, in which groups of people are asked to share their opinions on public-policy issues, help to ensure that politicians hear a wide variety of perspectives. But scaling up these initiatives can be tricky, and the discussions are typically restricted to relatively small groups to ensure that all voices are heard.

Intrigued by research into the potential of large language models (LLMs) to support such discussions, Summerfield and his colleagues came up with a study to assess whether AI could help people with opposing viewpoints to reach a compromise.

They used a fine-tuned version of the pretrained DeepMind LLM Chinchilla, and named their system the Habermas Machine, after the philosopher Jürgen Habermas, who developed a theory about how rational discussion can help to resolve conflict.

In one experiment to test the model, the researchers recruited 439 UK residents and sorted them into groups of six. Each group discussed three questions related to UK public policy, with members sharing their personal opinions on each topic. These opinions were then fed to the AI, which generated overarching statements that combined all participants’ viewpoints. Participants ranked the statements and critiqued them, and the AI incorporated this feedback into a final summary of the group’s collective view.
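The round trip can be pictured as a simple two-stage loop. Below is a minimal sketch, assuming a generic `llm()` text-generation call as a placeholder; the actual Habermas Machine is a fine-tuned Chinchilla model and its interface has not been published.

```python
# Illustrative sketch of the mediation loop described above.
# llm() is a placeholder for any text-generation call; it is NOT
# the Habermas Machine's real interface, which is unpublished.

def llm(prompt: str) -> str:
    raise NotImplementedError("plug in a model call here")

def draft_group_statement(question: str, opinions: list[str]) -> str:
    """Stage 1: synthesize all participants' opinions into one statement."""
    prompt = (
        f"Question: {question}\n"
        + "\n".join(f"Opinion: {o}" for o in opinions)
        + "\nWrite one statement reflecting the group's shared position."
    )
    return llm(prompt)

def revise_group_statement(draft: str, critiques: list[str]) -> str:
    """Stage 2: fold the participants' critiques into a final summary."""
    prompt = (
        f"Draft statement: {draft}\n"
        + "\n".join(f"Critique: {c}" for c in critiques)
        + "\nRevise the statement to address every critique."
    )
    return llm(prompt)
```

Note that in the study the rankings and critiques came from the human participants themselves; only the drafting and revision steps were delegated to the model.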

“The model is trained to try to produce a statement which will garner maximum endorsement by a group of people who have volunteered their opinions,” says Summerfield. “Because the model learns what your preferences are over these statements, it can then produce a statement which is most likely to satisfy everyone.”
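Put differently, the system can be thought of as generating several candidate statements and returning the one that a learned preference model predicts the group will endorse most strongly. A hedged sketch, in which `predicted_endorsement()` stands in for that learned model (an assumption for illustration, not a published API):

```python
# Illustrative only: the paper describes a reward model trained on
# participants' rankings; predicted_endorsement() is an assumed
# placeholder for such a model.

def predicted_endorsement(statement: str, opinions: list[str]) -> float:
    """Predict how strongly the group would endorse a statement."""
    raise NotImplementedError

def best_statement(candidates: list[str], opinions: list[str]) -> str:
    # Return the candidate with the highest predicted group endorsement.
    return max(candidates, key=lambda s: predicted_endorsement(s, opinions))
```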

Lost connections

“Actually applying these technologies into deliberative experiments and processes is really good to see,” says Sammy McKinney, who studies deliberative democracy and its intersections with AI at the University of Cambridge, UK. But he adds that researchers should carefully consider the potential impacts of AI on the human aspect of debate. “A key reason to support citizen deliberation is that it creates certain kinds of spaces for people to relate to each other,” he says. “By removing more human contact and human facilitation, what are we losing?”

Summerfield acknowledges the limitations associated with AI technologies such as the Habermas Machine. “We did not train the model to try to intervene in the deliberation,” he says, which means that the model’s statements could include extremist or other problematic beliefs if participants expressed them. He adds that rigorous research into the impact AI has on society is crucial to understanding its value.

“Proceeding with caution seems important to me,” says McKinney, “and then taking steps to, where possible, mitigate those concerns.”


Related coverage can be found at The New York Times: ‘This Chatbot Pulls People Away From Conspiracy Theories’, which discusses how many people doubted or abandoned false beliefs after a short conversation with the DebunkBot.
