Improved Approximate Regret for Decentralized Online Continuous Submodular Maximization via Reductions

📅 2026-02-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the significant gap between the approximate regret bounds known for decentralized online continuous submodular maximization (D-OCSM) and those achieved in decentralized online convex optimization (D-OCO), as well as the inability of projection-free D-OCSM algorithms to match centralized performance. To bridge these gaps, the paper proposes two novel reduction techniques that transform D-OCSM into a D-OCO problem. Over general convex decision sets, these reductions simultaneously narrow the gap to the convex setting and, for the first time, allow projection-free algorithms to recover centralized approximate regret bounds. Over downward-closed decision sets, the proposed methods likewise match centralized projection-free performance while substantially tightening the existing approximate regret bounds.

📝 Abstract
To expand the applicability of decentralized online learning, previous studies have proposed several algorithms for decentralized online continuous submodular maximization (D-OCSM) -- a non-convex/non-concave setting with continuous DR-submodular reward functions. However, there exist large gaps between their approximate regret bounds and the regret bounds achieved in the convex setting. Moreover, projection-free algorithms, which can efficiently handle complex decision sets, cannot even recover the approximate regret bounds achieved in the centralized setting. In this paper, we first demonstrate that for D-OCSM over general convex decision sets, these two issues can be addressed simultaneously. Furthermore, for D-OCSM over downward-closed decision sets, we show that the second issue can be addressed while significantly alleviating the first. Our key techniques are two reductions from D-OCSM to decentralized online convex optimization (D-OCO), which exploit D-OCO algorithms to improve the approximate regret of D-OCSM in these two cases, respectively.
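For readers unfamiliar with the objective class, a minimal illustration (not taken from the paper) of continuous DR-submodularity: the coverage-style function f(x) = 1 - prod_i(1 - x_i) on [0,1]^n is DR-submodular, meaning its gradient is entrywise non-increasing, so x <= y componentwise implies grad f(x) >= grad f(y) (diminishing returns). The function and threshold are illustrative choices, not anything defined by the paper:

```python
import numpy as np

# Hypothetical example: f(x) = 1 - prod(1 - x_i) on [0,1]^n is a
# standard continuous DR-submodular function (coverage-style objective).
def f(x):
    return 1.0 - np.prod(1.0 - x)

def grad_f(x):
    # d f / d x_i = prod_{j != i} (1 - x_j)
    return np.array([np.prod(np.delete(1.0 - x, i)) for i in range(len(x))])

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 0.5, size=4)        # a point in [0, 0.5]^4
y = x + rng.uniform(0.0, 0.5, size=4)    # y >= x componentwise, y in [0,1]^4

# Diminishing returns: the gradient shrinks entrywise as x grows.
assert np.all(grad_f(x) >= grad_f(y))
```

The DR property is what makes constant-factor approximate regret achievable for this non-concave class, whereas general non-concave rewards admit no such guarantee.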
Problem

Research questions and friction points this paper aims to address.

decentralized online learning
continuous submodular maximization
approximate regret
projection-free algorithms
decision sets
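For context on the "approximate regret" friction point: since exact maximization of DR-submodular functions is intractable, performance is measured against an α-fraction of the best fixed decision in hindsight. A standard definition (not quoted from this paper) for a learner playing x_t over a convex decision set K is:

```latex
R_{\alpha}(T) \;=\; \alpha \max_{x \in \mathcal{K}} \sum_{t=1}^{T} f_t(x) \;-\; \sum_{t=1}^{T} f_t(x_t),
```

where α ∈ (0,1] is the approximation ratio (e.g., α = 1 − 1/e is typical for monotone DR-submodular objectives). Sublinear R_α(T) means the learner asymptotically competes with an α-approximation of the best fixed decision.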
Innovation

Methods, ideas, or system contributions that make the work stand out.

decentralized online learning
continuous submodular maximization
approximate regret
reduction techniques
projection-free algorithms
Yuanyu Wan
Zhejiang University
Machine Learning · Online Learning · Distributed Optimization
Yu Shen
School of Software Technology, Zhejiang University, Ningbo, China; State Key Laboratory of Blockchain and Data Security, Zhejiang University, Hangzhou, China
Dingzhi Yu
Nanjing University
Machine Learning · Stochastic Optimization · Online Learning
Bo Xue
City University of Hong Kong
Bandits · Stochastic Optimization
Mingli Song
School of Software Technology, Zhejiang University, Ningbo, China; State Key Laboratory of Blockchain and Data Security, Zhejiang University, Hangzhou, China