🤖 AI Summary
Online political discussions frequently suffer from opinion homogenization, heightened polarization, and diminished legitimacy due to user self-selection and platform algorithmic biases. To address this, we propose a large language model (LLM)-based conversational agent intervention framework that dynamically identifies argumentative gaps in real time and introduces missing perspectives to enhance discursive diversity. Unlike conventional content moderation or recommendation approaches, our method transparently discloses the AI's identity, a design choice validated through pre-registered randomized experiments showing no attenuation of its effectiveness in broadening viewpoint representation. We introduce a novel dynamic intervention framework grounded in argument coverage metrics and a hybrid evaluation system integrating objective and subjective measures. Results demonstrate statistically significant improvements in argument coverage breadth (p < 0.01), with objective metrics increasing by 37% and inter-rater agreement on subjective assessments reaching 0.82.
📄 Abstract
A wide range of participation is essential for democracy, as it helps prevent the dominance of extreme views, erosion of legitimacy, and political polarization. However, engagement in online political discussions often features a limited spectrum of views due to high levels of self-selection and the tendency of online platforms to facilitate exchanges primarily among like-minded individuals. This study examines whether an LLM-based bot can widen the scope of perspectives expressed by participants in online discussions through two pre-registered randomized experiments conducted in a chatroom. We evaluate the impact of a bot that actively monitors discussions, identifies missing arguments, and introduces them into the conversation. The results indicate that our bot significantly expands the range of arguments, as measured by both objective and subjective metrics. Furthermore, disclosure of the bot as AI does not significantly alter these effects. These findings suggest that LLM-based moderation tools can positively influence online political discourse.
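The summary above refers to an "argument coverage" metric for deciding which perspectives are missing from a discussion. The paper does not specify its implementation, but the core idea can be sketched as the fraction of a predefined argument taxonomy already voiced by participants; everything below (the function names, the set-based representation, and the example taxonomy) is an illustrative assumption, not the authors' method.

```python
# Hypothetical sketch of an argument-coverage metric: coverage is the
# fraction of arguments from a predefined taxonomy that have already
# appeared in the discussion. All names are illustrative assumptions.

def argument_coverage(mentioned: set, taxonomy: set) -> float:
    """Fraction of taxonomy arguments already voiced in the discussion."""
    if not taxonomy:
        return 0.0
    return len(mentioned & taxonomy) / len(taxonomy)

def missing_arguments(mentioned: set, taxonomy: set) -> set:
    """Arguments a moderating bot could introduce to broaden the debate."""
    return taxonomy - mentioned

# Toy example: four argument families, two already raised.
taxonomy = {"economic", "moral", "legal", "environmental"}
mentioned = {"economic", "moral"}

print(argument_coverage(mentioned, taxonomy))           # 0.5
print(sorted(missing_arguments(mentioned, taxonomy)))   # ['environmental', 'legal']
```

A bot built on this idea would recompute the metric as messages arrive (e.g., by classifying each message against the taxonomy) and intervene when coverage stalls below a threshold.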