Generating Fair Consensus Statements with Social Choice on Token-Level MDPs

📅 2025-10-15
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing LLM-based approaches for generating consensus statements lack intrinsic structural guarantees, making it difficult to ensure provably fair aggregation of free-form opinions. Method: We formulate consensus generation as a multi-objective, token-level Markov decision process (MDP) and, for the first time in text generation, integrate core concepts from social choice theory: core stability and proportional fairness. Our fair consensus mechanism jointly maximizes Nash welfare and minimizes worst-agent alignment loss. It employs personalized language models as agent policies, coupled with implicit Q-function estimation and search-based optimization. Contribution/Results: This egalitarian welfare-driven generation strategy significantly improves worst-case agent alignment, outperforming state-of-the-art baselines across diverse evaluation metrics. By grounding consensus synthesis in formal fairness axioms and sequential decision-making, our framework provides both theoretical rigor and empirical gains in equitable opinion aggregation.

📝 Abstract
Current frameworks for consensus statement generation with large language models lack the inherent structure needed to provide provable fairness guarantees when aggregating diverse free-form opinions. We model the task as a multi-objective, token-level Markov Decision Process (MDP), where each objective corresponds to an agent's preference. Token-level rewards for each agent are derived from their policy (e.g., a personalized language model). This approach utilizes the finding that such policies implicitly define optimal Q-functions, providing a principled way to quantify rewards at each generation step without a value function (Rafailov et al., 2024). This MDP formulation creates a formal structure amenable to analysis using principles from social choice theory. We propose two approaches grounded in social choice theory. First, we propose a stochastic generation policy guaranteed to be in the ex-ante core, extending core stability concepts from voting theory to text generation. This policy is derived from an underlying distribution over complete statements that maximizes proportional fairness (Nash Welfare). Second, for generating a single statement, we target the maximization of egalitarian welfare using search algorithms within the MDP framework. Empirically, experiments using language models to instantiate agent policies show that search guided by the egalitarian objective generates consensus statements with improved worst-case agent alignment compared to baseline methods, including the Habermas Machine (Tessler et al., 2024).
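The abstract's key mechanism, that an agent's policy implicitly defines per-token rewards (following Rafailov et al., 2024), can be sketched as a log-ratio between the agent's personalized policy and a shared reference policy. The function name, the `beta` parameter, and the probabilities below are illustrative, not taken from the paper.

```python
import math

def implicit_token_reward(agent_logprob: float,
                          ref_logprob: float,
                          beta: float = 1.0) -> float:
    """Per-token reward r(s, a) = beta * (log pi_agent(a|s) - log pi_ref(a|s)).

    This is the DPO-style implicit reward: no separate value function is
    trained; the agent's own policy log-probabilities quantify the reward
    of emitting token `a` in context `s`.
    """
    return beta * (agent_logprob - ref_logprob)

# Example: the agent assigns this token 4x the reference probability,
# so the implicit reward is log(4) > 0.
r = implicit_token_reward(agent_logprob=math.log(0.4),
                          ref_logprob=math.log(0.1))
```

Summing these per-token rewards along a generated statement yields each agent's utility for that statement, which the fairness objectives below then aggregate.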
Problem

Research questions and friction points this paper is trying to address.

Generating consensus statements with provable fairness guarantees
Aggregating diverse opinions using token-level MDP formulation
Applying social choice theory to ensure fair text generation
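The proportional-fairness (Nash welfare) objective behind the paper's core-stable stochastic policy can be illustrated on a toy instance. All utilities here are made-up numbers, and this grid-free sketch only evaluates the objective; it is not the paper's optimization procedure.

```python
import math

def nash_welfare(p, utilities):
    """Nash welfare of a lottery p over candidate statements:
    sum_i log( E_{k~p}[ u_i(statement_k) ] ),
    where utilities[i][k] is agent i's utility for statement k."""
    return sum(
        math.log(sum(p_k * u_agent[k] for k, p_k in enumerate(p)))
        for u_agent in utilities
    )

# Two agents with opposed preferences over two candidate statements.
utilities = [[0.9, 0.1],   # agent 0 prefers statement 0
             [0.1, 0.9]]   # agent 1 prefers statement 1

# A 50/50 lottery gives both agents expected utility 0.5, beating either
# pure choice: this is why the ex-ante-core policy is stochastic.
mixed = nash_welfare([0.5, 0.5], utilities)
pure = nash_welfare([1.0, 0.0], utilities)
```

Because the log turns the product of expected utilities into a sum, maximizing Nash welfare penalizes any distribution that drives one agent's expected utility toward zero.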
Innovation

Methods, ideas, or system contributions that make the work stand out.

Token-level MDP models multi-agent text generation
Social choice theory ensures provable fairness guarantees
Search algorithms maximize egalitarian welfare for consensus
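The egalitarian (max-min) search named above can be sketched as a greedy token-level procedure: at each step, extend the statement with the token that maximizes the worst agent's score. The scoring lambdas below are toy stand-ins for the per-agent implicit rewards; the paper's actual search algorithm may differ.

```python
def egalitarian_greedy(vocab, horizon, agent_scores):
    """Greedily build a token sequence maximizing min-over-agents score.

    agent_scores: list of functions, each mapping a token sequence to a
    scalar utility for one agent (a stand-in for cumulative implicit reward).
    """
    seq = []
    for _ in range(horizon):
        # Choose the token whose continuation maximizes the minimum agent score.
        seq.append(max(vocab,
                       key=lambda tok: min(f(seq + [tok]) for f in agent_scores)))
    return seq

# Toy agents: one rewards the token "a", the other rewards "b".
prefer = lambda t: (lambda seq: seq.count(t) / max(len(seq), 1))
out = egalitarian_greedy(["a", "b"], horizon=4,
                         agent_scores=[prefer("a"), prefer("b")])
```

On this toy instance the max-min criterion alternates between the two agents' preferred tokens rather than favoring either one, which is exactly the worst-case-alignment behavior the paper reports improving.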