Question the Questions: Auditing Representation in Online Deliberative Processes

📅 2025-11-06
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study addresses the underrepresentation of participant questions in online deliberative processes. Method: the authors propose the first fairness-aware justified representation (JR) auditing framework for question selection, extending the JR concept to general utility settings. Integrating social choice theory, integer linear programming, and large language models (LLMs), they design an efficient auditing algorithm with time complexity $O(mn \log n)$ and deploy it in a real-world online deliberation platform. Contribution/Results: the work establishes the first computationally tractable JR audit for question selection; shows that LLM-generated summary questions are partially representative but exhibit systematic biases; and empirically validates, across hundreds of deliberations in more than 50 countries, that the framework improves transparency and inclusivity. It offers an interpretable, production-ready technical pathway for improving representation in deliberative processes.

📝 Abstract
A central feature of many deliberative processes, such as citizens' assemblies and deliberative polls, is the opportunity for participants to engage directly with experts. While participants are typically invited to propose questions for expert panels, only a limited number can be selected due to time constraints. This raises the challenge of how to choose a small set of questions that best represents the interests of all participants. We introduce an auditing framework for measuring the level of representation provided by a slate of questions, based on the social choice concept known as justified representation (JR). We present the first algorithms for auditing JR in the general utility setting, with our most efficient algorithm achieving a runtime of $O(mn \log n)$, where $n$ is the number of participants and $m$ is the number of proposed questions. We apply our auditing methods to historical deliberations, comparing the representativeness of (a) the actual questions posed to the expert panel (chosen by a moderator), (b) participants' questions chosen via integer linear programming, and (c) summary questions generated by large language models (LLMs). Our results highlight both the promise and current limitations of LLMs in supporting deliberative processes. By integrating our methods into an online deliberation platform that has been used for hundreds of deliberations across more than 50 countries, we make it easy for practitioners to audit and improve representation in future deliberations.
Problem

Research questions and friction points this paper is trying to address.

Selecting representative questions from many participants for expert panels
Auditing representation quality in online deliberative processes efficiently
Comparing moderator-chosen, algorithm-selected, and AI-generated question representativeness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Auditing framework using justified representation concept
Efficient algorithm with O(mn log n) runtime
Integration into online deliberation platform worldwide
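To make the JR audit concrete, here is a minimal sketch in the classic approval-voting setting (the paper works in the more general utility setting and achieves $O(mn \log n)$; this brute-force check is only illustrative, and all names in it are hypothetical):

```python
def violates_jr(approvals, slate, k):
    """Return a candidate question witnessing a JR violation, or None.

    approvals: list of sets; approvals[i] = questions participant i approves
    slate: set of selected questions
    k: target slate size (a "cohesive" group needs >= n/k members)
    """
    n = len(approvals)
    # Participants whose approved set is disjoint from the slate
    # are entirely unrepresented by the chosen questions.
    unrepresented = [a for a in approvals if not (a & slate)]
    candidates = set().union(*approvals) - slate if approvals else set()
    for c in candidates:
        # JR fails if >= n/k unrepresented participants all approve
        # some common question c that was left out of the slate.
        supporters = sum(1 for a in unrepresented if c in a)
        if supporters >= n / k:
            return c
    return None
```

For example, with four participants approving `{1}, {1}, {2}, {3}` and a slate `{3}` of target size `k = 2`, the two participants who approve question 1 form a cohesive unrepresented group, so the audit reports question 1 as a witness; adding question 1 to the slate removes the violation.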
Soham De
FAIR at Meta, University of Washington
Lodewijk Gelauff
Stanford University
Ashish Goel
Stanford University
S. Milli
FAIR at Meta
Ariel Procaccia
Alfred and Rebecca Lin Professor of Computer Science, Harvard University
Artificial Intelligence · Algorithmic Game Theory · Computational Social Choice
Alice Siu
Stanford University