🤖 AI Summary
Verifying polynomial nonnegativity—a fundamental NP-hard problem with broad applications in control theory and robotics—is traditionally addressed via sum-of-squares (SOS) relaxation, which relies on large-scale semidefinite programming (SDP); however, the number of SDP decision variables grows quadratically with the size of the monomial basis. This paper introduces the first Transformer-based approach for SOS certification, proposing a learning-augmented basis selection framework: a data-driven model predicts an approximately minimal monomial basis to drastically reduce SDP size, complemented by a theoretically guaranteed fallback verification mechanism ensuring soundness. Evaluated on over 200 benchmarks, the method achieves over 100× speedup over state-of-the-art solvers and successfully solves previously intractable large-scale instances, significantly enhancing the practical scalability and usability of SOS programming.
📝 Abstract
Certifying nonnegativity of polynomials is a well-known NP-hard problem with direct applications spanning non-convex optimization, control, robotics, and beyond. A sufficient condition for nonnegativity is the Sum of Squares (SOS) property, i.e., that the polynomial can be written as a sum of squares of other polynomials. In practice, however, certifying the SOS criterion remains computationally expensive and typically involves solving a Semidefinite Program (SDP), whose dimensionality grows quadratically in the size of the monomial basis of the SOS expression; hence, various methods to reduce the size of the monomial basis have been proposed. In this work, we introduce the first learning-augmented algorithm to certify the SOS criterion. To this end, we train a Transformer model that predicts an almost-minimal monomial basis for a given polynomial, thereby drastically reducing the size of the corresponding SDP. Our overall methodology comprises three key components: efficient training dataset generation of over 100 million SOS polynomials, design and training of the corresponding Transformer architecture, and a systematic fallback mechanism to ensure correct termination, which we analyze theoretically. We validate our approach on over 200 benchmark datasets, achieving speedups of over $100\times$ compared to state-of-the-art solvers and enabling the solution of instances where competing approaches fail. Our findings provide novel insights towards transforming the practical scalability of SOS programming.
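The reduction the abstract describes can be made concrete with a small sketch (not the paper's code): an SOS certificate for a polynomial $p$ is a positive semidefinite Gram matrix $Q$ with $p(x) = z(x)^\top Q\, z(x)$ for some monomial basis $z(x)$, so the SDP has one decision variable per entry of $Q$ and shrinking the basis shrinks the SDP quadratically. The example below verifies a hand-picked certificate for a univariate polynomial; the polynomial, basis, and Gram matrix are illustrative choices, not taken from the paper.

```python
import numpy as np

# Target: p(x) = 1 + 4x + 6x^2 + 4x^3 + x^4 = (1 + 2x + x^2)^2.
# Coefficients of x^0 .. x^4.
p_coeffs = np.array([1.0, 4.0, 6.0, 4.0, 1.0])

# Monomial basis z(x) = [1, x, x^2] and a candidate Gram matrix
# Q = v v^T, i.e. a rank-one (hence PSD) certificate from the
# square (1 + 2x + x^2)^2.
v = np.array([1.0, 2.0, 1.0])
Q = np.outer(v, v)

# Soundness check 1: Q must be positive semidefinite.
assert np.all(np.linalg.eigvalsh(Q) >= -1e-9)

# Soundness check 2: z^T Q z must reproduce p. For a univariate
# basis [1, x, ..., x^{n-1}], the coefficient of x^k in z^T Q z is
# the sum of Q[i, j] over i + j = k (an anti-diagonal of Q).
n = len(v)
recovered = np.array([Q[::-1].diagonal(k - (n - 1)).sum()
                      for k in range(2 * n - 1)])
assert np.allclose(recovered, p_coeffs)
print("SOS certificate verified: p(x) = (1 + 2x + x^2)^2")
```

In a full pipeline, $Q$ is not guessed but found by an SDP solver; the point of basis selection is that dropping monomials from $z(x)$ (here, e.g., a constant-free basis for a polynomial with no low-degree terms) removes whole rows and columns of $Q$ before the solver ever runs.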