🤖 AI Summary
This work addresses submodular maximization under matroid constraints when fairness lower bounds are imposed, a setting where the best existing algorithms only satisfy a relaxed version of the constraints that loses a factor of 2 and give no guarantee at all in the important special case ℓ = 1. The authors propose an algorithm for monotone submodular objectives that breaks this factor-2 barrier: for any constant ε > 0, it achieves a constant-factor approximation to the objective while, in expectation, satisfying each lower-bound fairness constraint up to a factor of (1−ε), which covers key scenarios including ℓ = 1. The approach integrates randomized techniques, submodular optimization, and matroid theory, supported by probabilistic analysis and constraint relaxation, to establish rigorous theoretical guarantees. Empirical evaluations on clustering, recommendation, and coverage tasks demonstrate that the method substantially improves fairness while maintaining a constant-factor approximation ratio.
📝 Abstract
Submodular maximization subject to matroid constraints is a central problem with many applications in machine learning. As algorithms are increasingly used in decision-making over datapoints with sensitive attributes such as gender or race, it is becoming crucial to enforce fairness to avoid bias and discrimination. Recent work has addressed the challenge of developing efficient approximation algorithms for fair matroid submodular maximization. However, the best algorithms known so far are only guaranteed to satisfy a relaxed version of the fairness constraints that loses a factor 2, i.e., the problem may ask for $\ell$ elements with a given attribute, but the algorithm is only guaranteed to find $\lfloor \ell/2 \rfloor$. In particular, there is no provable guarantee when $\ell=1$, which corresponds to a key special case of perfect matching constraints. In this work, we achieve a new trade-off via an algorithm that gets arbitrarily close to full fairness. Namely, for any constant $\varepsilon>0$, we give a constant-factor approximation to fair monotone matroid submodular maximization that in expectation loses only a factor $(1-\varepsilon)$ in the lower-bound fairness constraint. Our empirical evaluation on a standard suite of real-world datasets -- including clustering, recommendation, and coverage tasks -- demonstrates the practical effectiveness of our methods.
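To make the fairness constraints concrete, the following is a minimal sketch (not the paper's algorithm, which is randomized and comes with provable guarantees) of a two-phase greedy heuristic for a simple special case: maximizing a monotone submodular coverage function under a cardinality budget `k`, with a lower bound `ell[c]` on the number of selected elements carrying each sensitive attribute `c`. All names, the toy instance, and the two-phase strategy are illustrative assumptions.

```python
# Hypothetical sketch, NOT the paper's method: two-phase greedy for monotone
# submodular maximization with per-group fairness lower bounds ell[c]
# ("select at least ell[c] elements with attribute c") and budget k.

def coverage(selected, sets):
    """Monotone submodular objective: number of ground-set items covered."""
    covered = set()
    for e in selected:
        covered |= sets[e]
    return len(covered)

def fair_greedy(elements, color, sets, ell, k):
    S = []
    # Phase 1: satisfy each group's lower bound with greedy marginal gains.
    for c, need in ell.items():
        group = [e for e in elements if color[e] == c]
        for _ in range(need):
            cands = [e for e in group if e not in S]
            best = max(cands,
                       key=lambda e: coverage(S + [e], sets) - coverage(S, sets))
            S.append(best)
    # Phase 2: spend the remaining budget on globally best marginal gains.
    while len(S) < k:
        cands = [e for e in elements if e not in S]
        best = max(cands,
                   key=lambda e: coverage(S + [e], sets) - coverage(S, sets))
        S.append(best)
    return S

# Toy instance: each element covers some items; colors are sensitive attributes.
sets = {0: {1, 2}, 1: {2, 3}, 2: {4}, 3: {1, 4, 5}, 4: {6}}
color = {0: 'a', 1: 'a', 2: 'b', 3: 'b', 4: 'b'}
ell = {'a': 1, 'b': 1}   # the ell = 1 case highlighted in the abstract
S = fair_greedy(list(sets), color, sets, ell, k=3)
```

Note that this heuristic enforces the lower bounds exactly on this toy instance but, unlike the paper's algorithm, carries no approximation guarantee, and a factor-2 relaxation as in prior work would permit returning only ⌊ℓ/2⌋ elements per group, i.e., zero when ℓ = 1.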