Can AI make you less gullible, or is it a conspiracy?

How AI can guide you out of conspiracy theory traps


AI chatbots may struggle with hallucinating made-up information, but new research has shown they might be useful for pushing back against unfounded and hallucinatory ideas in human minds. MIT Sloan and Cornell University scientists have published a paper in Science claiming that conversing with a chatbot powered by a large language model (LLM) reduces belief in conspiracies by about 20%.

To see how an AI chatbot might affect conspiratorial thinking, the scientists arranged for 2,190 participants to discuss conspiracy theories with a chatbot running OpenAI's GPT-4 Turbo model. Participants were asked to describe a conspiracy theory they found credible, including the reasons and evidence they believed supported it. The chatbot, prompted to be persuasive, then responded with counterarguments tailored to those details as the conversation unfolded. The study addressed the perennial AI hallucination issue by having a professional fact-checker evaluate 128 claims made by the chatbot during the study. The claims were 99.2% accurate, which the researchers said was thanks to extensive online documentation of conspiracy theories represented in the model's training data.
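
For readers curious about the mechanics, here is a minimal sketch of what such a setup could look like using the OpenAI Python SDK. The system prompt and the console loop are illustrative assumptions; the paper's actual prompts and interface are not reproduced here.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative system prompt -- not the study's actual wording.
SYSTEM_PROMPT = (
    "You are a persuasive, strictly factual assistant. The user believes a "
    "conspiracy theory. Use the specific reasons and evidence they give to "
    "offer accurate, tailored counterarguments."
)

def debunking_chat() -> None:
    """Run a simple console conversation loop with the model."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    while True:
        user_input = input("You: ").strip()
        if not user_input:  # an empty line ends the session
            break
        messages.append({"role": "user", "content": user_input})
        response = client.chat.completions.create(
            model="gpt-4-turbo",  # the model family used in the study
            messages=messages,
        )
        answer = response.choices[0].message.content
        messages.append({"role": "assistant", "content": answer})
        print(f"Bot: {answer}")

if __name__ == "__main__":
    debunking_chat()
```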

The idea behind turning to AI to debunk conspiracy theories was that a chatbot's deep reservoir of information and adaptable conversational style could reach people through personalized argument. Based on follow-up assessments ten days and two months after the first conversation, it worked. Most participants showed reduced belief in the conspiracy theories they had espoused, "from classic conspiracies involving the assassination of John F. Kennedy, aliens, and the Illuminati, to those pertaining to topical events such as COVID-19 and the 2020 US presidential election," the researchers found.
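
To give a concrete sense of how a roughly 20% belief reduction might be computed from before-and-after survey ratings, here is a small sketch; the figures below are invented for illustration, not study data.

```python
# Hypothetical belief ratings on a 0-100 scale, measured before the
# conversation and again at follow-up -- invented numbers, not study data.
before = [80, 65, 90, 70]
after = [60, 55, 75, 50]

# Average relative reduction in belief across participants
drops = [(b - a) / b for b, a in zip(before, after)]
mean_drop = sum(drops) / len(drops)
print(f"Average belief reduction: {mean_drop:.0%}")  # prints ~21% here
```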

FactBot Fun

The results were a real surprise to the researchers, who had hypothesized that people are largely unreceptive to evidence-based arguments debunking conspiracy theories. Instead, the study shows that a well-designed AI chatbot can present counterarguments effectively, producing a measurable change in belief. The researchers concluded that AI tools could be a boon in combating misinformation, albeit one that demands caution, since the same technology could just as easily be used to mislead people further.

The study supports the value of projects with similar goals. For instance, fact-checking site Snopes recently released an AI tool called FactBot to help people figure out whether something they've heard is real or not. FactBot uses Snopes' archive and generative AI to answer questions without having to comb through articles using more traditional search methods. Meanwhile, The Washington Post created Climate Answers to clear up confusion on climate change issues, relying on its climate journalism to answer questions directly on the topic.
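
Tools like FactBot broadly follow a retrieval-augmented generation pattern: retrieve relevant fact-check articles, then have the model answer only from what was retrieved. Below is a simplified sketch under that assumption; the search_archive helper is hypothetical, and Snopes' actual implementation is not public.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def search_archive(question: str) -> list[str]:
    """Hypothetical stand-in for querying a fact-check archive.
    A real system would search an index of published articles."""
    return ["(illustrative excerpt from a relevant fact-check article)"]

def answer_from_archive(question: str) -> str:
    """Ask the model to answer strictly from retrieved excerpts."""
    excerpts = "\n\n".join(search_archive(question))
    response = client.chat.completions.create(
        model="gpt-4-turbo",  # illustrative model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "Answer only from the fact-check excerpts provided. "
                    "If they do not cover the question, say so."
                ),
            },
            {
                "role": "user",
                "content": f"Excerpts:\n{excerpts}\n\nQuestion: {question}",
            },
        ],
    )
    return response.choices[0].message.content

print(answer_from_archive("Did the Illuminati stage the Moon landing?"))
```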

“Many people who strongly believe in seemingly fact-resistant conspiratorial beliefs can change their minds when presented with compelling evidence. From a theoretical perspective, this paints a surprisingly optimistic picture of human reasoning: Conspiratorial rabbit holes may indeed have an exit,” the researchers wrote. “Practically, by demonstrating the persuasive power of LLMs, our findings emphasize both the potential positive impacts of generative AI when deployed responsibly and the pressing importance of minimizing opportunities for this technology to be used irresponsibly.”

