Discretely Beyond $1/e$: Guided Combinatorial Algorithms for Submodular Maximization

📅 2024-05-08
📈 Citations: 4
Influential: 1
🤖 AI Summary
This paper addresses the discrete maximization of non-monotone submodular functions under cardinality and matroid constraints, breaking the long-standing $1/e \approx 0.367$ approximation barrier for combinatorial algorithms. We propose the **guided randomized greedy framework**, integrating fast local search while avoiding costly continuous extensions. We further design **deterministic and nearly-linear-time variants** that preserve the approximation guarantees. Under cardinality constraints, our algorithm achieves a $0.385$ approximation ratio, improving upon the previous best $0.367$; under matroid constraints, it attains $0.305$, surpassing $0.281$. The deterministic variant achieves $0.377$ with nearly-linear time complexity. To our knowledge, this is the first purely combinatorial algorithm, requiring no continuous optimization, that strictly exceeds the $1/e$ barrier, significantly enhancing scalability and practical applicability.

📝 Abstract
For constrained, not necessarily monotone submodular maximization, all known approximation algorithms with ratio greater than $1/e$ require continuous ideas, such as queries to the multilinear extension of a submodular function and its gradient, which are typically expensive to simulate with the original set function. For combinatorial algorithms, the best known approximation ratios for both size and matroid constraints are obtained by a simple randomized greedy algorithm of Buchbinder et al. [9]: $1/e \approx 0.367$ for a size constraint and $0.281$ for a matroid constraint, in $\mathcal{O}(kn)$ queries, where $k$ is the rank of the matroid. In this work, we develop the first combinatorial algorithms to break the $1/e$ barrier: we obtain an approximation ratio of $0.385$ in $\mathcal{O}(kn)$ queries to the submodular set function for a size constraint, and $0.305$ for a general matroid constraint. These are achieved by guiding the randomized greedy algorithm with a fast local search algorithm. Further, we develop deterministic versions of these algorithms that maintain the same ratios and asymptotic time complexity. Finally, we develop a deterministic, nearly linear time algorithm with ratio $0.377$.
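The abstract takes as its baseline the randomized greedy algorithm of Buchbinder et al. for a size constraint: in each of $k$ rounds, the algorithm samples uniformly among the $k$ elements with the largest marginal gain (padded with zero-gain dummies so low-value rounds can be skipped). A minimal Python sketch of that baseline, not of the guided variant proposed in this paper; the function and variable names are illustrative:

```python
import random

def randomized_greedy(f, ground_set, k, seed=0):
    """Randomized greedy sketch for max f(S) s.t. |S| <= k.

    f          -- submodular set function, callable on Python sets
    ground_set -- set of candidate elements
    k          -- cardinality budget
    """
    rng = random.Random(seed)
    S = set()
    for _ in range(k):
        # Marginal gain of each remaining element with respect to S.
        gains = sorted(
            ((f(S | {e}) - f(S), e) for e in ground_set - S),
            key=lambda t: t[0],
            reverse=True,
        )
        # Top-k candidates, padded with zero-gain dummies so that a
        # round may effectively add nothing.
        top = gains[:k] + [(0.0, None)] * max(0, k - len(gains))
        gain, e = rng.choice(top[:k])
        if e is not None and gain > 0:
            S.add(e)
    return S
```

For example, with a coverage function (cardinality of the union of covered items, a standard submodular objective), the sketch returns a feasible set of at most `k` elements; the paper's contribution is to *guide* this sampling step with a fast local search so the combined ratio exceeds $1/e$.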
Problem

Research questions and friction points this paper is trying to address.

Submodular maximization algorithms
Breaking 1/e approximation barrier
Combinatorial vs continuous methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Guided randomized greedy algorithm
Fast local search integration
Deterministic linear time algorithm
Yixing Chen
Department of Computer Science & Engineering, Texas A&M University
Ankur Nath
Department of Computer Science & Engineering, Texas A&M University
Chunli Peng
Department of Computer Science & Engineering, Texas A&M University
Alan Kuhnle
Texas A&M University
combinatorial optimization · submodular optimization · machine learning