🤖 AI Summary
This study addresses the challenge of enhancing procedural legitimacy in collective decision-making by fostering "losers' consent"—the willingness of participants whose preferred outcomes are not adopted to nonetheless accept the result as fair. To this end, the work introduces an approach that integrates AI-driven semi-structured interviews with interactive visualizations: an AI interviewer elicits personal narratives about policy issues from individuals, and the visualization displays others' lived experiences alongside predicted policy stances to cultivate mutual understanding and trust. In a randomized controlled experiment with 181 participants, this method significantly increased perceived legitimacy of the decision process, trust in outcomes, and comprehension of opposing viewpoints, even among those whose preferences were unmet. By combining AI-mediated narrative collection with visualization and prioritizing procedural fairness over mere efficiency or scalability, this research offers a novel pathway for strengthening social cohesion.
📝 Abstract
AI is increasingly used to scale collective decision-making, but far less attention has been paid to how such systems can support procedural legitimacy, particularly the conditions shaping losers' consent: whether participants who do not get their preferred outcome still accept it as fair. We ask (1) how AI can help ground collective decisions in participants' different experiences and beliefs, and (2) whether exposure to these experiences can increase trust, understanding, and social cohesion even when people disagree with the outcome. We built a system that uses a semi-structured AI interviewer to elicit personal experiences on policy topics and an interactive visualization that displays predicted policy support alongside those voiced experiences. In a randomized experiment (n = 181), interacting with the visualization increased perceived legitimacy, trust in outcomes, and understanding of others' perspectives, even though all participants encountered decisions that went against their stated preferences. Our hope is that the design and evaluation of this tool spurs future researchers to focus on how AI can help not only achieve scale and efficiency in democratic processes, but also increase trust and connection between participants.