Tractable hierarchies of convex relaxations for polynomial optimization on the nonnegative orthant

📅 2022-09-13
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses polynomial optimization problems (POPs) over the nonnegative orthant. We propose a novel semidefinite relaxation hierarchy grounded in the Dickinson–Povh extension of Pólya's Positivstellensatz: by applying an even-symmetric variable transformation (squaring each variable), we construct a scalable relaxation sequence with controllable matrix dimensions. When the feasible region has nonempty interior, the hierarchy converges to the global optimum at rate $O(\varepsilon^{-c})$. Its key contribution is the first method enabling *arbitrary pre-specification* of the maximum matrix dimension in the relaxations, thereby decoupling scalability from problem degree. Experiments on neural network robustness verification and computation of the largest positive singular value show that the approach achieves better bounds and several-hundred-fold speedups over the standard moment–sum-of-squares (moment-SOS) hierarchy.
📝 Abstract
We consider polynomial optimization problems (POPs) on a semialgebraic set contained in the nonnegative orthant (every POP on a compact set can be put in this format by a simple translation of the origin). Such a POP can be converted to an equivalent POP by squaring each variable. Using even symmetry and the concept of factor width, we propose a hierarchy of semidefinite relaxations based on the extension of Pólya's Positivstellensatz by Dickinson–Povh. As its distinguishing and crucial feature, the maximal matrix size of each resulting semidefinite relaxation can be chosen arbitrarily and, in addition, we prove that the sequence of values returned by the new hierarchy converges to the optimal value of the original POP at the rate $O(\varepsilon^{-c})$ if the semialgebraic set has nonempty interior. When applied to (i) robustness certification of multi-layer neural networks and (ii) computation of positive maximal singular values, our method based on Pólya's Positivstellensatz provides better bounds and runs several hundred times faster than the standard moment-SOS hierarchy.
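The abstract's first step is the variable-squaring reformulation: a POP on the nonnegative orthant {x ≥ 0} becomes an equivalent unconstrained-sign POP via the substitution x_i = y_i², and the resulting polynomial is even (invariant under flipping the sign of any variable). A minimal sketch, with a toy objective chosen purely for illustration:

```python
# Hedged sketch of the squared-variable reformulation from the abstract.
# The objective f below is a made-up example, not from the paper.

def f(x1, x2):
    # toy polynomial objective, intended for the nonnegative orthant
    return x1**2 - 3*x1*x2 + 2*x2 + 1

def g(y1, y2):
    # equivalent even polynomial: g(y) = f(y1**2, y2**2),
    # so minimizing g over all of R^2 matches minimizing f over x >= 0
    return f(y1**2, y2**2)

# even symmetry: g is unchanged under any sign flips of its variables
for (y1, y2) in [(0.3, -1.7), (2.0, 0.5)]:
    assert g(y1, y2) == g(-y1, y2) == g(y1, -y2) == g(-y1, -y2)
```

This even symmetry is what the paper exploits to restrict the relaxations to invariant monomials, shrinking the resulting semidefinite blocks.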
Problem

Research questions and friction points this paper is trying to address.

Develops tractable convex relaxations for polynomial optimization over the nonnegative orthant
Improves bounds and speed for neural network robustness certification
Computes positive maximal singular values more efficiently
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposes a hierarchy of semidefinite relaxations for POPs
Utilizes even symmetry and factor width
Extends Pólya's Positivstellensatz for optimization
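The factor-width idea mentioned above is what caps the matrix size: a matrix has factor width at most k if it is a sum of PSD matrices each supported on a k×k principal submatrix. A minimal sketch, assuming the classical fact that a symmetric diagonally dominant matrix with nonnegative diagonal has factor width at most 2 (the example matrix and function names are illustrative, not the paper's construction):

```python
# Hedged sketch: decompose a symmetric diagonally dominant matrix with
# nonnegative diagonal into PSD pieces, each supported on a 2x2 (or 1x1)
# principal submatrix -- certifying factor width <= 2. Illustrative only.

def factor_width_2_decomposition(A):
    """Return a list of (i, j, block) whose embeddings sum to A.
    Assumes A is symmetric, diagonally dominant, nonneg diagonal."""
    n = len(A)
    blocks = []
    residual_diag = [A[i][i] for i in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            a = A[i][j]
            if a != 0:
                # the 2x2 matrix [[|a|, a], [a, |a|]] is PSD
                # (eigenvalues 0 and 2|a|)
                blocks.append((i, j, [[abs(a), a], [a, abs(a)]]))
                residual_diag[i] -= abs(a)
                residual_diag[j] -= abs(a)
    for i in range(n):
        # leftover diagonal mass (nonnegative by diagonal dominance)
        if residual_diag[i] != 0:
            blocks.append((i, i, [[residual_diag[i]]]))
    return blocks

A = [[4.0, 1.0, -1.0],
     [1.0, 3.0,  0.5],
     [-1.0, 0.5, 2.0]]

blocks = factor_width_2_decomposition(A)

# reconstruct A from the small PSD pieces
S = [[0.0] * 3 for _ in range(3)]
for (i, j, B) in blocks:
    idx = [i] if i == j else [i, j]
    for r, ri in enumerate(idx):
        for c, ci in enumerate(idx):
            S[ri][ci] += B[r][c]
assert all(abs(S[r][c] - A[r][c]) < 1e-12 for r in range(3) for c in range(3))
```

Replacing full PSD constraints by factor-width-k constraints is what lets the hierarchy pre-specify an arbitrary maximal matrix size, trading tightness for scalability.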
N. Mai
Institute of Mathematics, Vietnam Academy of Science and Technology
Victor Magron
CNRS
Polynomial optimization, quantum information, dynamical systems, deep learning, optimal power flow
J. Lasserre
CNRS; LAAS, Université de Toulouse
K. Toh
Department of Mathematics, National University of Singapore