🤖 AI Summary
This paper addresses the minimization of the difference of two submodular (DS) functions over both discrete and continuous domains, extending classical DS optimization beyond set functions to general discrete lattices (e.g., integer lattices) and smooth continuous spaces. Methodologically, the authors propose a novel variant of the DC (Difference-of-Convex) Algorithm with theoretical convergence guarantees and broad domain applicability, integrating submodular analysis, DC programming, and discretization-based approximation techniques. Theoretically, they prove that every function on a discrete domain and every smooth function on a continuous domain admits a DS decomposition, establishing the universality of the DS framework. Empirically, the method achieves significant improvements over state-of-the-art baselines on structured integer optimization tasks, including integer compressive sensing and integer least squares. Overall, this work provides a unified, principled, and computationally efficient paradigm for structured integer optimization.
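The universality claim above has a classical finite analogue for set functions that is easy to verify computationally: any set function f can be written as g - h with g, h submodular, by choosing a strictly submodular h(S) = α(n|S| - |S|²) and taking α large enough that g = f + h is submodular. The sketch below (illustrative only; not the paper's lattice construction, and the names are ours) checks this on a random set function.

```python
# Sketch of the classical DS decomposition for SET functions (illustrative;
# not the paper's construction for general lattices or continuous domains).
import random

n = 4
random.seed(0)
# A random set function on ground set {0,...,n-1}; sets are bitmasks.
f = {S: random.uniform(-1.0, 1.0) for S in range(1 << n)}

def popcount(S):
    return bin(S).count("1")

def submod_slacks(func):
    """Yield func(S+i) + func(S+j) - func(S+ij) - func(S) over all S and
    distinct i, j not in S; func is submodular iff every value is >= 0."""
    for S in range(1 << n):
        for i in range(n):
            if S >> i & 1:
                continue
            for j in range(i + 1, n):
                if S >> j & 1:
                    continue
                yield (func[S | 1 << i] + func[S | 1 << j]
                       - func[S | 1 << i | 1 << j] - func[S])

# For m(k) = n*k - k**2, each submodularity inequality of h has slack
# exactly 2*alpha (the second difference of m is -2), so taking alpha as
# half the worst violation of f makes g = f + h submodular.
alpha = max(0.0, -min(submod_slacks(f))) / 2
h = {S: alpha * (n * popcount(S) - popcount(S) ** 2) for S in range(1 << n)}
g = {S: f[S] + h[S] for S in range(1 << n)}

assert min(submod_slacks(g)) >= -1e-12  # g is submodular
assert min(submod_slacks(h)) >= -1e-12  # h is submodular
# f = g - h by construction, so f is DS.
```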
📝 Abstract
Submodular functions, defined on continuous or discrete domains, arise in numerous applications. We study the minimization of the difference of two submodular (DS) functions over both domains, extending prior work restricted to set functions. We show that all functions on discrete domains and all smooth functions on continuous domains are DS. For discrete domains, we observe that DS minimization is equivalent to minimizing the difference of two convex (DC) functions, as in the set function case. We propose a novel variant of the DC Algorithm (DCA) and apply it to the resulting DC program, obtaining theoretical guarantees comparable to those in the set function case. The algorithm can be applied to continuous domains via discretization. Experiments demonstrate that our method outperforms baselines in integer compressive sensing and integer least squares.
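For intuition on the DCA backbone that the paper's variant builds on: to minimize f = g - h with g, h convex, DCA linearizes h at the current iterate and solves the resulting convex subproblem, x_{k+1} = argmin_x g(x) - ⟨y_k, x⟩ with y_k ∈ ∂h(x_k). The toy sketch below (ours, assuming a 1-D example where the subproblem has a closed form; it is not the paper's discrete variant) runs this loop on f(x) = ½x² - |x|, whose minimizers are x = ±1.

```python
# Generic DCA loop for f = g - h (g, h convex): repeatedly linearize h
# and minimize the convex surrogate. Toy 1-D instance, closed-form subproblem.
def dca(subgrad_h, argmin_surrogate, x0, iters=50, tol=1e-10):
    """Iterate x_{k+1} = argmin_x g(x) - y_k * x, where y_k is a
    subgradient of h at x_k; stop when the iterates stabilize."""
    x = x0
    for _ in range(iters):
        y = subgrad_h(x)             # linearize the concave part -h at x_k
        x_new = argmin_surrogate(y)  # solve the convex subproblem
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Toy DC function: f(x) = 0.5*x**2 - |x|, i.e. g(x) = 0.5*x**2, h(x) = |x|.
subgrad_h = lambda x: 1.0 if x >= 0 else -1.0   # a subgradient of |x|
argmin_surrogate = lambda y: y                  # argmin_x 0.5*x**2 - y*x = y

x_star = dca(subgrad_h, argmin_surrogate, x0=0.3)
print(x_star)  # 1.0, a (local) minimizer of f with f(x_star) = -0.5
```

Starting from a negative x0 the same loop converges to -1.0, illustrating that DCA, like the paper's variant, targets stationary points rather than global minima.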