🤖 AI Summary
This work addresses the challenge of infeasible projection operations in distributed online DR-submodular optimization. We propose the first decentralized algorithm that requires neither centralized coordination nor explicit projection steps. Our method leverages upper-linearizable function modeling, enabling unified treatment of both monotone and non-monotone up-concave objective functions under general convex constraints, and accommodating first-order, zeroth-order, and various bandit feedback settings. Theoretically, the algorithm achieves an $O(T^{1-\theta/2})$ dynamic regret bound, $O(T^{\theta})$ communication complexity, and $O(T^{2\theta})$ linear optimization oracle calls, for any $\theta \in (0,1]$. To our knowledge, this is the first projection-free framework extended to the broader class of up-concave functions, thereby overcoming the reliance of conventional DR-submodular optimization on strong structural assumptions and centralized projections.
📝 Abstract
We introduce a novel framework for decentralized projection-free optimization, extending projection-free methods to a broader class of upper-linearizable functions. Our approach combines decentralized optimization techniques with the flexibility of the upper-linearizable function framework, effectively generalizing traditional DR-submodular function optimization. For decentralized upper-linearizable function optimization, we obtain a regret of $O(T^{1-\theta/2})$ with $O(T^{\theta})$ communication complexity and $O(T^{2\theta})$ calls to a linear optimization oracle, for any $0 \le \theta \le 1$. This yields the first results for monotone up-concave optimization with general convex constraints and for non-monotone up-concave optimization with general convex constraints. Further, the above results for first-order feedback are extended to zeroth-order, semi-bandit, and bandit feedback.
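To make the regret–complexity tradeoff concrete, the stated rates can be instantiated at specific values of $\theta$; the two rows below follow by direct substitution into $O(T^{1-\theta/2})$, $O(T^{\theta})$, and $O(T^{2\theta})$ and are illustrations, not additional results:

```latex
% Instantiations of the stated bounds (direct substitution of \theta):
\begin{align*}
\theta = 1:\quad   & O(T^{1/2}) \text{ regret}, \; O(T) \text{ communication}, \; O(T^{2}) \text{ oracle calls} \\
\theta = 2/3:\quad & O(T^{2/3}) \text{ regret}, \; O(T^{2/3}) \text{ communication}, \; O(T^{4/3}) \text{ oracle calls}
\end{align*}
```

Larger $\theta$ thus buys a tighter regret bound at the cost of more communication rounds and more linear optimization oracle calls.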