🤖 AI Summary
This study addresses the underrepresentation of participant questions in online deliberation settings. Method: We propose the first fairness-aware justified representation (JR) auditing framework for question selection, extending the JR concept to general utility settings. Integrating social choice theory, integer linear programming, and large language models (LLMs), we design an efficient auditing algorithm with time complexity $O(mn\log n)$ and deploy it in a real-world deliberation platform. Contribution/Results: Our work establishes the first computationally tractable JR audit for question-selection scenarios; reveals that LLM-generated summary questions are partially representative but exhibit systematic limitations; and empirically validates, across hundreds of deliberations spanning more than 50 countries, that the framework enhances transparency and inclusivity. It provides an interpretable, production-ready technical pathway for improving deliberative processes.
📝 Abstract
A central feature of many deliberative processes, such as citizens' assemblies and deliberative polls, is the opportunity for participants to engage directly with experts. While participants are typically invited to propose questions for expert panels, only a limited number can be selected due to time constraints. This raises the challenge of how to choose a small set of questions that best represents the interests of all participants. We introduce an auditing framework for measuring the level of representation provided by a slate of questions, based on the social choice concept known as justified representation (JR). We present the first algorithms for auditing JR in the general utility setting, with our most efficient algorithm achieving a runtime of $O(mn\log n)$, where $n$ is the number of participants and $m$ is the number of proposed questions. We apply our auditing methods to historical deliberations, comparing the representativeness of (a) the actual questions posed to the expert panel (chosen by a moderator), (b) participants' questions chosen via integer linear programming, and (c) summary questions generated by large language models (LLMs). Our results highlight both the promise and current limitations of LLMs in supporting deliberative processes. By integrating our methods into an online deliberation platform that has been used for hundreds of deliberations across more than 50 countries, we make it easy for practitioners to audit and improve representation in future deliberations.
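To make the auditing idea concrete, here is a minimal sketch of a JR audit in the simpler approval setting (each participant approves a subset of questions), not the paper's general utility setting or its $O(mn\log n)$ algorithm. The function name `audit_jr`, the brute-force $O(mn)$ scan, and the test data are illustrative assumptions: JR is violated when at least $n/k$ participants, none of whom approve any selected question, all approve a common unselected question.

```python
def audit_jr(approvals, slate, m):
    """Naive JR audit in the approval setting (illustrative sketch).

    approvals: list of sets; approvals[i] = question indices
               approved by participant i.
    slate:     set of k selected question indices.
    m:         total number of proposed questions.
    Returns an unselected question witnessing a JR violation,
    or None if the slate satisfies JR.
    """
    n, k = len(approvals), len(slate)
    # Participants whose approved questions miss the slate entirely.
    unrepresented = [a for a in approvals if not (a & slate)]
    for q in range(m):
        if q in slate:
            continue
        # JR violation: >= n/k unrepresented participants all
        # approve the same unselected question q.
        supporters = sum(1 for a in unrepresented if q in a)
        if supporters * k >= n:
            return q
    return None
```

For example, with four participants approving `{0}, {0}, {1, 2}, {3}` and a slate of size `k = 2`, the slate `{2, 3}` leaves two participants (half of them, i.e. $n/k$) jointly unrepresented on question 0, so the audit flags it, whereas `{0, 3}` satisfies JR.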