Argumentative Experience: Reducing Confirmation Bias on Controversial Issues through LLM-Generated Multi-Persona Debates

📅 2024-12-05
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
Problem: Users exhibit persistent confirmation bias and narrow information exposure when engaging with controversial topics. Method: The study proposes an LLM-driven multi-persona debate system that uses multi-perspective prompt engineering to generate ideologically diverse virtual debaters and integrates the resulting debate into the information-seeking process to induce cognitive conflict. Contribution/Results: Combining eye-tracking with a mixed-methods, within-subjects user study, the work quantitatively assesses the system's impact on belief malleability and cognitive openness. Compared to a traditional search baseline, the system significantly reduces confirmation bias (p < 0.01), weakens conviction toward initial beliefs, broadens the scope of information exploration, and encourages more creative interactions, offering empirical support for simulated collective intelligence in bias-resilient information design.
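The summary describes the debate mechanism only at a high level. The sketch below illustrates what multi-perspective prompt engineering for ideologically diverse debaters might look like, with personas taking alternating turns over a shared transcript. It is a minimal illustration, not the authors' implementation: the stance list, prompt wording, model name, and the make_persona_prompt/run_debate helpers are assumptions, and the OpenAI Python SDK is used only for concreteness (the retrieval-integration and eye-tracking components are not shown).

```python
# Minimal sketch of multi-persona debate prompting (illustrative, not the paper's prompts).
# Assumes the `openai` package is installed and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()          # reads OPENAI_API_KEY from the environment
MODEL = "gpt-4o-mini"      # placeholder model; the paper's model may differ

# Hypothetical stances used to seed ideologically diverse debaters.
STANCES = ["strongly supports", "strongly opposes", "is undecided about"]


def make_persona_prompt(topic: str, stance: str) -> str:
    """Build a system prompt that fixes one virtual debater's perspective."""
    return (
        f"You are a debater who {stance} the following issue: {topic}. "
        "Argue from that perspective with concrete evidence and reasoning, "
        "respond to the previous speakers, and stay in character."
    )


def run_debate(topic: str, turns: int = 2) -> list[str]:
    """Alternate turns among the personas, feeding each one the transcript so far."""
    transcript: list[str] = []
    for _ in range(turns):
        for stance in STANCES:
            messages = [{"role": "system", "content": make_persona_prompt(topic, stance)}]
            if transcript:
                messages.append(
                    {"role": "user", "content": "Debate so far:\n" + "\n\n".join(transcript)}
                )
            else:
                messages.append({"role": "user", "content": "Open the debate with your position."})
            reply = client.chat.completions.create(model=MODEL, messages=messages)
            transcript.append(f"[{stance}] {reply.choices[0].message.content}")
    return transcript


if __name__ == "__main__":
    for turn in run_debate("school uniforms should be mandatory"):
        print(turn, "\n")
```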

📝 Abstract
Large language models (LLMs) are enabling designers to give life to exciting new user experiences for information access. In this work, we present a system that generates LLM personas to debate a topic of interest from different perspectives. How might information seekers use and benefit from such a system? Can centering information access around diverse viewpoints help to mitigate thorny challenges like confirmation bias in which information seekers over-trust search results matching existing beliefs? How do potential biases and hallucinations in LLMs play out alongside human users who are also fallible and possibly biased? Our study exposes participants to multiple viewpoints on controversial issues via a mixed-methods, within-subjects study. We use eye-tracking metrics to quantitatively assess cognitive engagement alongside qualitative feedback. Compared to a baseline search system, we see more creative interactions and diverse information-seeking with our multi-persona debate system, which more effectively reduces user confirmation bias and conviction toward their initial beliefs. Overall, our study contributes to the emerging design space of LLM-based information access systems, specifically investigating the potential of simulated personas to promote greater exposure to information diversity, emulate collective intelligence, and mitigate bias in information seeking.
Problem

Research questions and friction points this paper is trying to address.

Reducing confirmation bias in controversial issues via LLM debates
Evaluating multi-persona debates for diverse information exposure
Assessing LLM biases and human interaction in information access
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-generated multi-persona debates
Eye-tracking for cognitive engagement
Simulated personas reduce confirmation bias
Li Shi
School of Information, University of Texas at Austin, USA
Houjiang Liu
PhD Candidate, School of Information, University of Texas at Austin
Design, Human-Computer Interaction
Yian Wong
Department of Computer Science, University of Texas at Austin, USA
Utkarsh Mujumdar
School of Information, University of Texas at Austin, USA
Dan Zhang
School of Information, University of Texas at Austin, USA
Jacek Gwizdka
Associate Professor, Information eXperience Lab, School of Information, University of Texas at Austin
Human-Information Interaction, Interactive Information Retrieval, Eye-tracking, NeuroIS
Matthew Lease
School of Information, University of Texas at Austin, USA