Breaking Barriers: Combinatorial Algorithms for Non-monotone Submodular Maximization with Sublinear Adaptivity and $1/e$ Approximation

📅 2025-02-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Parallel maximization of non-monotone submodular functions under cardinality constraints suffers from a trade-off: continuous algorithms achieve a $1/e$ approximation but incur high adaptivity and query complexity, while combinatorial algorithms have long struggled to simultaneously guarantee strong approximation ratios and efficiency. Method: We propose the first randomized parallel combinatorial framework, yielding two novel algorithms: (1) a randomized algorithm achieving a $(1/e - \varepsilon)$-approximation; and (2) an algorithm attaining a $(1/4 - \varepsilon)$-approximation with high probability. Both integrate sampling, threshold-based filtering, and parallel greedy selection, supported by randomized analysis and adaptive control techniques. Results: Our algorithms achieve $\mathcal{O}(\log n \log k)$ adaptivity and $\mathcal{O}(n \log n \log k)$ query complexity, matching the state-of-the-art for monotone settings. Experiments demonstrate competitive objective values and significantly improved query efficiency over baselines, thereby bridging the theoretical gap between continuous and combinatorial approaches.
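The combination of sampling, threshold-based filtering, and greedy selection described above can be sketched as a threshold-decreasing loop. This is an illustrative reconstruction, not the paper's pseudocode: the function name `parallel_threshold_greedy`, the geometric threshold decay, the stopping rule, and the toy cut objective are all simplifying assumptions.

```python
import random

def parallel_threshold_greedy(f, ground, k, eps=0.2, seed=0):
    """Sketch of a threshold-decreasing combinatorial loop (an assumption,
    not the paper's exact procedure): in each adaptive round, all marginal
    gains could be evaluated in parallel; elements still meeting the
    threshold are filtered in, and the threshold decays geometrically,
    giving O(log k) threshold levels for fixed eps."""
    rng = random.Random(seed)
    S = set()
    d = max(f({x}) for x in ground)      # best singleton value
    tau = d                              # start the threshold at the top
    while tau >= eps * d / k and len(S) < k:
        # Filtering step: keep candidates whose marginal gain meets tau.
        survivors = [x for x in ground - S if f(S | {x}) - f(S) >= tau]
        rng.shuffle(survivors)           # randomized selection order
        for x in survivors:
            # Re-check the gain, since S grew within this round.
            if len(S) < k and f(S | {x}) - f(S) >= tau:
                S.add(x)
        tau *= 1 - eps                   # geometric threshold decrease
    return S

# Toy objective: the cut function of the complete graph K4 is
# non-monotone submodular, and its value depends only on |S|.
def cut_k4(S):
    return len(S) * (4 - len(S))

S = parallel_threshold_greedy(cut_k4, set(range(4)), k=2)
print(len(S), cut_k4(S))   # -> 2 4
```

In this sketch each pass over `survivors` stands in for one adaptive round; the real algorithms additionally use random batch sampling to bound the number of rounds and queries.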

📝 Abstract
With the rapid growth of data in modern applications, parallel combinatorial algorithms for maximizing non-monotone submodular functions have gained significant attention. The state-of-the-art approximation ratio of $1/e$ is currently achieved only by a continuous algorithm (Ene & Nguyen, 2020) with adaptivity $\mathcal{O}(\log(n))$. In this work, we focus on size constraints and propose a $(1/4-\varepsilon)$-approximation algorithm with high probability for this problem, as well as the first randomized parallel combinatorial algorithm achieving a $(1/e-\varepsilon)$ approximation ratio, which bridges the gap between continuous and combinatorial approaches. Both algorithms achieve $\mathcal{O}(\log(n)\log(k))$ adaptivity and $\mathcal{O}(n\log(n)\log(k))$ query complexity. Empirical results show our algorithms achieve competitive objective values, with the first algorithm particularly efficient in queries.
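For concreteness, here is a minimal example (not from the paper) of the objective class in question: the graph cut function is a classic non-monotone submodular function, so adding an element can decrease the value even though marginal gains still diminish. The star graph, `EDGES`, and the helper names below are illustrative assumptions.

```python
from itertools import combinations

# Star graph: center 0 joined to leaves 1, 2, 3.
EDGES = [(0, 1), (0, 2), (0, 3)]
GROUND = {0, 1, 2, 3}

def cut(S):
    """Number of edges with exactly one endpoint in S."""
    return sum((u in S) != (v in S) for u, v in EDGES)

def marginal(x, S):
    return cut(S | {x}) - cut(S)

# Non-monotone: adding leaf 1 to {0} removes a crossing edge.
assert cut({0}) == 3 and cut({0, 1}) == 2

# Submodular (diminishing returns): for S ⊆ T and x outside T,
# the gain of x at the smaller set is at least its gain at the larger one.
subsets = [set(c) for r in range(5) for c in combinations(GROUND, r)]
for S in subsets:
    for T in subsets:
        if S <= T:
            for x in GROUND - T:
                assert marginal(x, S) >= marginal(x, T)
print("cut is non-monotone and submodular")
```

Non-monotonicity is exactly what rules out plain greedy guarantees and motivates the randomized sampling in the paper's algorithms.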
Problem

Research questions and friction points this paper is trying to address.

Maximizing non-monotone submodular functions efficiently.
Reducing adaptivity in parallel combinatorial algorithms.
Bridging continuous and combinatorial optimization approaches.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combinatorial algorithms for submodular maximization
$1/e$ approximation with sublinear adaptivity
Randomized parallel combinatorial approach
Yixin Chen
Department of Computer Science & Engineering, Texas A&M University
Wenjing Chen
Department of Computer Science & Engineering, Texas A&M University
Alan Kuhnle
Texas A&M University
combinatorial optimization · submodular optimization · machine learning