🤖 AI Summary
This work addresses the lack of a unified approximation framework for monotone non-convex functions in machine learning and combinatorial optimization by introducing a novel first-order condition, termed “γ-weakly θ-up-concavity,” that strictly generalizes both DR-submodular and one-sided smooth (OSS) functions. By showing that such functions are upper-linearizable, i.e., that at any feasible point one can construct a linear surrogate whose gains approximate the non-convex objective up to a constant factor, the authors establish the first general optimization framework encompassing both function classes. The approximation factor depends solely on the problem parameters γ and θ and the geometry of the feasible set, and it translates into a unified constant-factor guarantee for offline maximization as well as static and dynamic regret bounds in online settings. The approach recovers the optimal approximation ratio for DR-submodular maximization and significantly improves upon existing results for OSS functions under matroid constraints.
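For intuition, in the DR-submodular special case up-concavity reduces to concavity along non-negative directions, whose standard first-order form is

$$f(y) - f(x) \;\le\; \langle \nabla f(x),\, y - x \rangle \qquad \text{for all feasible } x \le y.$$

A γ-weak variant would plausibly relax this inequality multiplicatively in γ; note, however, that this is only a sketch for orientation, and the paper's exact definition (including the role of θ) may differ.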
📝 Abstract
Optimizing monotone non-convex functions is a fundamental challenge across machine learning and combinatorial optimization. We introduce and study $\gamma$-weakly $\theta$-up-concavity, a novel first-order condition that characterizes a broad class of such functions. This condition provides a powerful unifying framework, strictly generalizing both DR-submodular functions and One-Sided Smooth (OSS) functions. Our central theoretical contribution demonstrates that $\gamma$-weakly $\theta$-up-concave functions are upper-linearizable: for any feasible point, we can construct a linear surrogate whose gains provably approximate the original non-linear objective. This approximation holds up to a constant factor, which we call the approximation coefficient, depending solely on $\gamma$, $\theta$, and the geometry of the feasible set. Via standard reductions to linear optimization, this linearizability immediately yields unified approximation guarantees for offline optimization as well as static and dynamic regret bounds in online settings. Moreover, our framework recovers the optimal approximation coefficient for DR-submodular maximization and significantly improves existing approximation coefficients for OSS optimization, particularly over matroid constraints.
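As a rough, illustrative reading of upper-linearizability (the paper's exact quantifiers and constants may differ from this sketch): there exist a surrogate map $g$, e.g., a suitably scaled gradient, and an approximation coefficient $\alpha \in (0, 1]$ depending only on $\gamma$, $\theta$, and the feasible set $\mathcal{K}$, such that

$$\alpha\, f(y) \;-\; f(x) \;\le\; \langle g(x),\, y - x \rangle \qquad \text{for all } x, y \in \mathcal{K}.$$

Under a guarantee of this shape, any procedure that (approximately) maximizes the linear surrogate $\langle g(x), \cdot \rangle$ over $\mathcal{K}$ inherits an $\alpha$-approximation guarantee, and, in online settings, $\alpha$-regret bounds, for $f$ via standard reductions to linear optimization.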