🤖 AI Summary
This paper studies data-driven least-squares problems with semidefinite (SD) constraints and asks what can be guaranteed, from finitely many samples, about the spectrum of the solution when those constraints are relaxed. We propose a distribution-free, computationally efficient surrogate optimization framework: the original semidefinite program (SDP) is replaced by a smooth surrogate objective, minimized via standard gradient descent, and paired with a verifiable spectral certificate. Theoretically, under i.i.d. sampling, the eigenvalues of the surrogate solution are $\varepsilon$-close to those enforced by the SD constraints with high probability; moreover, the spectral certificate consistently shrinks as the sample size grows. When the SDLS problem is used to learn an unknown quadratic function, we further derive an upper bound on the error between the gradient descent iterates on the surrogate cost and the true minimizer. To our knowledge, this is the first non-SDP method for learning unknown quadratic functions that is simultaneously computationally efficient and provably spectrally robust.
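To make the pipeline concrete, here is a minimal sketch of the surrogate-plus-certificate idea, not the paper's exact construction: the linear measurement model, noise level, step size, and tolerance `eps` below are all illustrative assumptions. The PSD constraint is dropped, the smooth least-squares cost is minimized by plain gradient descent, and the spectrum of the resulting matrix is then checked against a tolerance.

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 5, 500  # matrix size, number of i.i.d. samples

# Ground-truth PSD matrix generating the data (illustrative only).
G = rng.standard_normal((n, n))
X_true = G @ G.T / n

# Linear measurements y_k = <A_k, X_true> + noise, with symmetric A_k.
A = rng.standard_normal((N, n, n))
A = (A + A.transpose(0, 2, 1)) / 2
y = np.einsum('kij,ij->k', A, X_true) + 0.01 * rng.standard_normal(N)

# Surrogate: drop X >= 0 and minimize the smooth LS cost
#   f(X) = (1/2N) * sum_k (<A_k, X> - y_k)^2   by plain gradient descent.
X = np.zeros((n, n))
step = 0.2  # illustrative step size, below 2/L for this cost
for _ in range(3000):
    r = np.einsum('kij,ij->k', A, X) - y       # residuals
    X -= step * np.einsum('k,kij->ij', r, A) / N

# Spectral certificate: how far is the surrogate solution from PSD?
lam_min = np.linalg.eigvalsh(X).min()
eps = 0.05  # illustrative tolerance
print(f"lambda_min = {lam_min:.4f}, within tolerance: {lam_min >= -eps}")
```

Since the data come from a PSD ground truth, the unconstrained minimizer lands near the PSD cone and the certificate check passes for sufficiently many samples, which is the behavior the paper's bound quantifies.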
📝 Abstract
We study data-driven least squares (LS) problems with semidefinite (SD) constraints and derive finite-sample guarantees on the spectrum of their optimal solutions when these constraints are relaxed. In particular, we provide a high-confidence bound allowing one to solve a simpler program in place of the full SDLS problem, while ensuring that the eigenvalues of the resulting solution are $\varepsilon$-close to those enforced by the SD constraints. The developed certificate, which consistently shrinks as the amount of data increases, turns out to be easy to compute and distribution-free, requiring only independent and identically distributed samples. Moreover, when the SDLS problem is used to learn an unknown quadratic function, we establish bounds on the error between the gradient descent iterates minimizing the surrogate cost obtained by dropping the SD constraints and the true minimizer.
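The quadratic-learning setting in the last sentence can be sketched as follows; this is a minimal illustration under assumed details, since the paper's exact surrogate, feature map, and step-size rule are not specified here. An unknown convex quadratic $f(x) = x^\top Q x + c^\top x$ is fit by gradient descent on the unconstrained LS cost, and the learned spectrum and minimizer are then compared to the truth.

```python
import numpy as np

rng = np.random.default_rng(1)
n, N = 3, 400  # input dimension, number of i.i.d. samples

# Unknown convex quadratic f(x) = x' Q x + c' x with Q >= 0 (illustrative).
M = rng.standard_normal((n, n))
Q_true = M @ M.T + np.eye(n)
c_true = rng.standard_normal(n)
X = rng.standard_normal((N, n))
y = np.einsum('ki,ij,kj->k', X, Q_true, X) + X @ c_true
y += 0.01 * rng.standard_normal(N)

# Surrogate LS features: monomials x_i x_j (i <= j) and x_i; no SD constraint.
iu = np.triu_indices(n)
Phi = np.hstack([X[:, iu[0]] * X[:, iu[1]], X])

# Gradient descent on the unconstrained cost (1/2N) * ||Phi @ theta - y||^2.
theta = np.zeros(Phi.shape[1])
step = N / np.linalg.norm(Phi, 2) ** 2  # 1/L for this quadratic cost
for _ in range(20000):
    theta -= step * Phi.T @ (Phi @ theta - y) / N

# Rebuild symmetric Q_hat (off-diagonal coefficients carry a factor of 2).
Q_hat = np.zeros((n, n))
Q_hat[iu] = theta[: len(iu[0])]
Q_hat = (Q_hat + Q_hat.T) / 2
c_hat = theta[len(iu[0]):]

# Compare spectra and minimizers (x* = -Q^{-1} c / 2 for a convex quadratic).
eig_gap = np.abs(np.linalg.eigvalsh(Q_hat) - np.linalg.eigvalsh(Q_true)).max()
x_err = np.linalg.norm(
    np.linalg.solve(Q_hat, c_hat) - np.linalg.solve(Q_true, c_true)
) / 2
print(f"max eigenvalue gap: {eig_gap:.4f}, minimizer error: {x_err:.4f}")
```

Both printed quantities shrink as $N$ grows, mirroring the paper's claim that the relaxed, gradient-based surrogate recovers the spectrum and minimizer of the constrained problem from enough i.i.d. data.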