Fair Multi-agent Persuasion with Submodular Constraints

📅 2025-11-11
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper studies multi-agent resource allocation under submodular constraints within the Bayesian persuasion framework: an informed mediator designs signaling schemes and tie-breaking rules so that a welfare-maximizing receiver, acting on the resulting posterior beliefs, produces allocations whose agent utilities (allocated resource times true value) satisfy majorization-based fairness, i.e., approximately maximize all symmetric, monotone, concave functions of the utilities. By linking the utility vector to the base polytope of a polymatroid, the paper proposes the first signaling mechanism achieving a logarithmic-factor fairness approximation. The mechanism attains arbitrary additive accuracy in weakly polynomial time, and its approximation ratio strictly improves on generic methods that yield only linear approximations. The approach provides both near-optimality guarantees and efficient algorithms for fair persuasion under submodular constraints.
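The base-polytope connection in the summary can be made concrete with Edmonds' greedy algorithm, which enumerates the vertices of a polymatroid's base polytope one permutation at a time. The sketch below is illustrative only: the submodular function `f` (a uniform-matroid rank) and the helper name `base_vertex` are stand-ins, not the paper's construction.

```python
from itertools import permutations

def base_vertex(f, order):
    """Edmonds' greedy algorithm: the vertex of the base polytope of the
    polymatroid defined by f that corresponds to the permutation `order`.
    f maps a frozenset to a real, with f(empty) = 0, monotone, submodular."""
    x, prefix = {}, frozenset()
    for e in order:
        new = prefix | {e}
        x[e] = f(new) - f(prefix)  # marginal gain of e given the prefix
        prefix = new
    return x

# Illustrative submodular function: rank of a uniform matroid of rank 2.
f = lambda S: min(len(S), 2)

# Every permutation yields a base-polytope vertex, and all vertices
# have coordinates summing to f(ground set), as the base polytope requires.
for order in permutations([0, 1, 2]):
    v = base_vertex(f, list(order))
    assert sum(v.values()) == f(frozenset([0, 1, 2]))
```

The paper's structural result says the achievable utility vectors themselves form the base polytope of a (different) polymatroid; the greedy routine above is the standard way such polytopes are explored.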

📝 Abstract
We study the problem of selection in the context of Bayesian persuasion. We are given multiple agents with hidden values (or quality scores), to whom resources must be allocated by a welfare-maximizing decision-maker. An intermediary with knowledge of the agents' values seeks to influence the outcome of the selection by designing informative signals and providing tie-breaking policies, so that when the receiver maximizes welfare over the resulting posteriors, the expected utilities of the agents (where utility is defined as allocation times value) achieve certain fairness properties. The fairness measure we will use is majorization, which simultaneously approximately maximizes all symmetric, monotone, concave functions of the utilities. We consider the general setting where the allocation to the agents needs to respect arbitrary submodular constraints, as given by the corresponding polymatroid. We present a signaling policy that, under a mild bounded rationality assumption on the receiver, achieves a logarithmically approximate majorized policy in this setting. The approximation ratio is almost best possible and significantly outperforms generic results that only yield linear approximations. A key component of our result is a structural characterization showing that the vector of agent utilities for a given signaling policy defines the base polytope of a different polymatroid, a result that may be of independent interest. In addition, we show that an arbitrarily good additive approximation to this vector can be produced in (weakly) polynomial time via the multiplicative weights update method.
Problem

Research questions and friction points this paper is trying to address.

Designing persuasive signals to ensure fair resource allocation among agents
Achieving fairness under submodular constraints in Bayesian persuasion settings
Developing efficient algorithms for approximate majorization in multi-agent systems
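The fairness notion in these bullets, approximate majorization, can be checked directly from prefix sums of the sorted utility vector. A minimal sketch under one common definition (for every k, the k worst-off agents' total utility must be within a factor alpha of the benchmark's); the function names are illustrative, not from the paper:

```python
def ascending_prefix_sums(u):
    """Prefix sums of u sorted ascending: the k-th entry is the
    total utility of the k worst-off agents."""
    out, total = [], 0.0
    for x in sorted(u):
        total += x
        out.append(total)
    return out

def approx_majorized(u, benchmark, alpha=1.0):
    """One common notion of alpha-approximate majorization: for every k,
    the sum of the k smallest entries of u is at least 1/alpha times the
    sum of the k smallest entries of the benchmark vector."""
    pu = ascending_prefix_sums(u)
    pb = ascending_prefix_sums(benchmark)
    return all(a >= b / alpha for a, b in zip(pu, pb))
```

A vector satisfying this for small alpha simultaneously approximately maximizes every symmetric, monotone, concave function of the utilities, which is why the paper targets it as the fairness objective.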
Innovation

Methods, ideas, or system contributions that make the work stand out.

Signaling policy with submodular constraints for fairness
Logarithmic approximation via polymatroid base polytope characterization
Multiplicative weights method for polynomial-time additive approximation
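The multiplicative weights bullet can be illustrated with the generic experts-template loop: maintain a weight per expert and decay each weight exponentially in its observed loss. This is the standard MWU mechanic, not the paper's specific instantiation, and the toy loss sequence below is invented for the example:

```python
import math

def mwu(n_experts, losses, eta=0.1):
    """Generic multiplicative weights update. `losses` is a sequence of
    rounds; each round gives a loss in [0, 1] for every expert. Returns
    the final normalized weight (probability) per expert."""
    w = [1.0] * n_experts
    for round_losses in losses:
        for i, loss in enumerate(round_losses):
            w[i] *= math.exp(-eta * loss)  # exponential decay in loss
    total = sum(w)
    return [wi / total for wi in w]

# Toy losses: expert 0 always incurs loss 0, the other two always loss 1,
# so the weight concentrates on expert 0.
losses = [[0.0, 1.0, 1.0]] * 100
p = mwu(3, losses, eta=0.5)
```

In the paper, MWU is what yields the weakly polynomial-time additive approximation to the utility vector; the loop above shows only the weight-update mechanics that drive such guarantees.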
Yannan Bai
Computer Science Department, Duke University, Durham NC 27708-0129
Kamesh Munagala
Professor of Computer Science, Duke University
Algorithms, Algorithmic Game Theory, Decision Sciences, Big Data
Yiheng Shen
Computer Science, Duke University
Economics and Computation
Davidson Zhu
Computer Science Department, Duke University, Durham NC 27708-0129