🤖 AI Summary
Problem: Users exhibit persistent confirmation bias and narrow information exposure when engaging with controversial topics. Method: This study proposes an LLM-driven multi-role debate system that uses multi-perspective prompt engineering to generate ideologically diverse virtual debaters, dynamically integrating debate into the information retrieval process to actively induce cognitive conflict. Contribution/Results: Combining eye-tracking with a mixed-methods user experiment, the study provides a quantitative assessment of the system's impact on belief malleability and cognitive openness. Compared to a traditional search baseline, the system significantly reduces confirmation bias (p < 0.01), attenuates initial belief strength, broadens the scope of information exploration, and elicits more creative interactions, providing empirical support for simulated collective intelligence as a tool for bias-resilient information design.
📝 Abstract
Large language models (LLMs) are enabling designers to give life to exciting new user experiences for information access. In this work, we present a system that generates LLM personas to debate a topic of interest from different perspectives. How might information seekers use and benefit from such a system? Can centering information access around diverse viewpoints help to mitigate thorny challenges like confirmation bias, in which information seekers over-trust search results matching existing beliefs? How do potential biases and hallucinations in LLMs play out alongside human users who are also fallible and possibly biased? In a mixed-methods, within-subjects study, we expose participants to multiple viewpoints on controversial issues, using eye-tracking metrics to quantitatively assess cognitive engagement alongside qualitative feedback. Compared to a baseline search system, our multi-persona debate system elicits more creative interactions and more diverse information-seeking, and more effectively reduces user confirmation bias and conviction toward initial beliefs. Overall, our study contributes to the emerging design space of LLM-based information access systems, specifically investigating the potential of simulated personas to promote greater exposure to information diversity, emulate collective intelligence, and mitigate bias in information seeking.