Conversations with chatbots can reduce belief in conspiracy theories

It is an experience some people know from their immediate personal environment: people who firmly believe in conspiracy theories can hardly be reached with factual counterarguments.

“It is almost a truism, but also an overly pessimistic view,” researchers now write in the scientific journal “Science”. A relatively short conversation with an AI model can lead to a sharp and lasting decline in conspiracy belief, “even among people whose beliefs are deeply rooted.”

George W. Bush in the classroom

Conspiracy theories, such as the claim that a secret, malicious organization wields great power, are widespread on social media and thus endanger the cohesion of democratic societies. Public information campaigns and discussions with close acquaintances who think differently usually do little to change this. That the theories persist in many people despite contradictory facts is often explained by their satisfying psychosocial needs for identity and group membership.

A research team led by psychologist Thomas Costello of the Massachusetts Institute of Technology and American University examined over 2,000 self-described supporters of various conspiracy theories to see whether AI-driven language models such as GPT-4 Turbo can refute those theories. Such models can draw on information about very specific claims and formulate counterarguments tailored to the respective conversation partner.

The participants spent an average of just under nine minutes talking to the chatbot. In the dialogue, the chatbot directly addressed the evidence presented by its human counterpart, such as alleged electoral fraud in the 2020 US presidential election or misinformation spread during the Covid-19 pandemic. One participant suspected that the US government was involved in the terrorist attacks on the World Trade Center on September 11, 2001. She had seen many documentaries on the subject, but what convinced her was video footage of then-US President George W. Bush, who remained outwardly calm when he was informed of the attack while visiting a school classroom.

The language model thanked the participant for the information, summarized it, and asked for confirmation that it had done so correctly. It first expressed understanding for questions and doubts in this context. Regarding the then president’s reaction, it pointed to the situation: he had wanted to avoid frightening the children. It also noted that critics and supporters alike later discussed this reaction at length, but that one has to distinguish between shock at an unexpected attack and evidence of a conspiracy. Finally, it answered two of the participant’s questions in detail. When her own opening statement was presented to her again, she rated her agreement with it at only 40 percent, down from 100 percent at the start.

Convincing counter-evidence

After the dialogues, a professional fact-checker rated 99.2 percent of the statements made by the bot as “true” and 0.8 percent as “misleading”. He classified none of them as “false” and detected no political bias.

As Costello’s team reports, the AI-led dialogues reduced participants’ belief in the misinformation by an average of 20 percent relative to the initial value, regardless of which conspiracy theory was put forward or the participant’s age. One in four participants subsequently no longer believed in their previously held position. The effect lasted for at least two months, the researchers report. This calls into question the often-proposed thesis that evidence and arguments are not effective.

“Many conspiracy believers were actually willing to revise their views when presented with convincing evidence to the contrary,” Costello is quoted as saying in a statement from American University. In each round of the discussion, the AI provided pages of very detailed explanations of why the theory in question was wrong. “It also knew how to be friendly and build a relationship with the participants,” says the psychologist.

Real change of opinion?

Independent experts, however, view the measurements critically. “Such changes are more likely artifacts arising from the experimental psychological situation,” Nicole Krämer, a social psychologist at the University of Duisburg-Essen, told the Science Media Center. They do not necessarily reflect a real change of opinion. Still, she finds it “impressive” that the shift in opinion is detectable even after two months.

“Against the backdrop of the much-discussed hallucinations of generative AI systems, it is astonishing that over 99 percent of the facts presented by the AI system were correct,” says Krämer. Costello and his co-authors nevertheless stress that AI must be used responsibly: the technology could also be used to convince people of conspiracy theories.
