🤖 AI Summary
This work addresses the tension between high communication overhead and limited approximation guarantees in submodular maximization under matroid constraints by proposing an Adaptive Thresholded Continuous Greedy (ATCG) algorithm. ATCG integrates continuous greedy optimization, multilinear relaxation, and distributed active-set management, featuring a progress-ratio-based adaptive thresholding mechanism that expands the active set only when the current candidate elements yield insufficient marginal gain. This strategy substantially reduces the transmission of feature embeddings while preserving objective values comparable to those of the full continuous greedy method. The algorithm achieves significantly lower communication complexity and provides a curvature-aware theoretical approximation guarantee. Empirical evaluation on a prototype selection task using the animal subset of CIFAR-10 demonstrates that ATCG effectively balances optimization performance and communication efficiency.
📝 Abstract
Submodular maximization under matroid constraints is a fundamental problem in combinatorial optimization with applications in sensing, data summarization, active learning, and resource allocation. While the Sequential Greedy (SG) algorithm achieves only a $\frac{1}{2}$-approximation due to irrevocable selections, Continuous Greedy (CG) attains the optimal $\bigl(1-\frac{1}{e}\bigr)$-approximation via the multilinear relaxation, at the cost of a progressively dense decision vector that forces agents to exchange feature embeddings for nearly every ground-set element. We propose \textit{ATCG} (\underline{A}daptive \underline{T}hresholded \underline{C}ontinuous \underline{G}reedy), which gates gradient evaluations behind a per-partition progress ratio $\eta_i$, expanding each agent's active set only when current candidates fail to capture sufficient marginal gain, thereby directly bounding which feature embeddings are ever transmitted. Theoretical analysis establishes a curvature-aware approximation guarantee with effective factor $\tau_{\mathrm{eff}}=\max\{\tau,1-c\}$, interpolating between the threshold-based guarantee and the low-curvature regime where \textit{ATCG} recovers the performance of CG. Experiments on a class-balanced prototype selection problem over a subset of the CIFAR-10 animal dataset show that \textit{ATCG} achieves objective values comparable to those of the full CG method while substantially reducing communication overhead through adaptive active-set expansion.
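The gating idea in the abstract can be illustrated with a minimal sketch. Note the exact definition of the progress ratio $\eta_i$ and the expansion rule below are assumptions for illustration, not the paper's specification: here the ratio compares the best marginal gain available inside the active set against the best gain over all elements, and one outside element is admitted (i.e., its feature embedding is transmitted) only when that ratio falls below the threshold $\tau$.

```python
# Hypothetical sketch of a progress-ratio-gated active-set expansion.
# The ratio eta and the single-element expansion rule are assumptions,
# not the paper's exact ATCG procedure.

def marginal_gain(f, S, e):
    """Marginal gain f(S + e) - f(S) of adding element e to set S."""
    return f(S | {e}) - f(S)

def gated_expand(f, S, active, outside, tau):
    """Expand the active set only when its candidates capture too little
    of the best available marginal gain (progress ratio below tau)."""
    best_active = max((marginal_gain(f, S, e) for e in active), default=0.0)
    best_overall = max(marginal_gain(f, S, e) for e in active | outside)
    eta = best_active / best_overall if best_overall > 0 else 1.0
    if eta < tau:
        # Admit the strongest outside element; only now would its
        # feature embedding need to be communicated.
        new = max(outside, key=lambda e: marginal_gain(f, S, e))
        active, outside = active | {new}, outside - {new}
    return active, outside
```

With a modular (additive) objective, the gate expands exactly when the active set's best element is dominated strongly enough by an outside one, which is the communication-saving behavior the abstract describes.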