Can AI Truly Represent Your Voice in Deliberations? A Comprehensive Study of Large-Scale Opinion Aggregation with LLMs

📅 2025-10-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses fairness issues—particularly representational gaps (e.g., omission of minority perspectives) and input-order bias—in large language model (LLM)-generated summaries of large-scale public deliberations. To this end, we introduce the first human-annotated evaluation framework for deliberative summarization. We construct DeliberationBank, a benchmark dataset with fine-grained annotations across four dimensions: representativeness, informativeness, neutrality, and policy alignment. Leveraging this dataset, we fine-tune DeBERTa to develop DeliberationJudge, an automated evaluator. Experimental evaluation across 18 state-of-the-art LLMs reveals pervasive systematic biases in their outputs. DeliberationJudge achieves significantly higher agreement with human judgments (+23.6% Pearson correlation) and superior computational efficiency compared to LLM-as-a-judge baselines. Our work establishes a reliable, scalable, and policy-aware paradigm for fairness assessment in AI-generated deliberative summaries.

📝 Abstract
Large-scale public deliberations generate thousands of free-form contributions that must be synthesized into representative and neutral summaries for policy use. While LLMs have been shown as a promising tool to generate summaries for large-scale deliberations, they also risk underrepresenting minority perspectives and exhibiting bias with respect to the input order, raising fairness concerns in high-stakes contexts. Studying and fixing these issues requires a comprehensive evaluation at a large scale, yet current practice often relies on LLMs as judges, which show weak alignment with human judgments. To address this, we present DeliberationBank, a large-scale human-grounded dataset with (1) opinion data spanning ten deliberation questions created by 3,000 participants and (2) summary judgment data annotated by 4,500 participants across four dimensions (representativeness, informativeness, neutrality, policy approval). Using these datasets, we train DeliberationJudge, a fine-tuned DeBERTa model that can rate deliberation summaries from individual perspectives. DeliberationJudge is more efficient and more aligned with human judgements compared to a wide range of LLM judges. With DeliberationJudge, we evaluate 18 LLMs and reveal persistent weaknesses in deliberation summarization, especially underrepresentation of minority positions. Our framework provides a scalable and reliable way to evaluate deliberation summarization, helping ensure AI systems are more representative and equitable for policymaking.
Problem

Research questions and friction points this paper is trying to address.

Evaluating AI fairness in summarizing large-scale public deliberations
Addressing underrepresentation of minority perspectives in LLM summaries
Developing reliable methods to assess deliberation summary quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

Created large-scale human-grounded dataset for evaluation
Fine-tuned DeBERTa model as deliberation summary judge
Framework enables scalable evaluation of AI summarization fairness
Shenzhe Zhu
University of Toronto
Trustworthy AI, AI Agent
Shu Yang
King Abdullah University of Science and Technology
Michiel A. Bakker
Massachusetts Institute of Technology
Alex Pentland
Massachusetts Institute of Technology, Stanford University
Jiaxin Pei
Stanford University, The University of Texas at Austin
Human-Centered AI, NLP, Human-Computer Interaction, Computational Social Science