Concentration inequalities for semidefinite least squares based on data

📅 2025-09-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper studies data-driven least-squares problems with semidefinite programming (SDP) constraints, focusing on provable spectral guarantees for solutions in finite-sample regimes when those constraints are relaxed. The authors propose a distribution-free, computationally efficient framework: the original SDP is replaced by a smooth surrogate objective, optimized via standard gradient descent, and augmented with a verifiable spectral certification mechanism. Under i.i.d. sampling, the eigenvalue deviation between the surrogate solution and the true SDP solution is bounded by ε with high probability, and the spectral certificate shrinks uniformly as the sample size grows. An upper bound on the gradient-descent iteration error is also derived. To the authors' knowledge, this is the first non-SDP method for learning unknown quadratic functions that simultaneously achieves computational efficiency and provable spectral robustness.

📝 Abstract
We study data-driven least squares (LS) problems with semidefinite (SD) constraints and derive finite-sample guarantees on the spectrum of their optimal solutions when these constraints are relaxed. In particular, we provide a high-confidence bound allowing one to solve a simpler program in place of the full SDLS problem, while ensuring that the eigenvalues of the resulting solution are $\varepsilon$-close to those enforced by the SD constraints. The developed certificate, which consistently shrinks as the number of data points increases, turns out to be easy to compute, distribution-free, and requires only independent and identically distributed samples. Moreover, when the SDLS is used to learn an unknown quadratic function, we establish bounds on the error between a gradient descent iterate minimizing the surrogate cost obtained with no SD constraints and the true minimizer.
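The relaxation in the abstract can be sketched in a few lines: fit the unknown quadratic by plain unconstrained least squares (the "simpler program"), then check a spectral certificate on the learned matrix instead of enforcing the SD constraint. Everything below — the ground-truth quadratic, the sample size, the noise level, and the tolerance `eps` — is an illustrative assumption, not the paper's actual construction or bound.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground-truth quadratic f(x) = x^T Q x + b^T x with PSD Q.
d = 3
A = rng.standard_normal((d, d))
Q_true = A @ A.T                      # PSD by construction
b_true = rng.standard_normal(d)

def f(x):
    return x @ Q_true @ x + b_true @ x

# Draw i.i.d. samples; each LS feature row stacks the monomials
# x_i * x_j (with a factor 2 off the diagonal) and the linear terms x_i.
n = 500
X = rng.standard_normal((n, d))
y = np.array([f(x) for x in X]) + 0.01 * rng.standard_normal(n)

iu = np.triu_indices(d)
quad_feats = np.array(
    [np.outer(x, x)[iu] * np.where(iu[0] == iu[1], 1.0, 2.0) for x in X]
)
Phi = np.hstack([quad_feats, X])

# Surrogate: plain least squares, no semidefinite constraint on Q.
theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)

# Reassemble the symmetric estimate of Q from its upper triangle.
Q_hat = np.zeros((d, d))
Q_hat[iu] = theta[: iu[0].size]
Q_hat = Q_hat + Q_hat.T - np.diag(np.diag(Q_hat))
eigs = np.linalg.eigvalsh(Q_hat)

# A spectral certificate in the spirit of the paper: accept the surrogate
# solution if its minimum eigenvalue sits within eps of the PSD cone.
eps = 0.1
certified = eigs.min() >= -eps
print(certified, eigs.min())
```

The point of the sketch is the workflow, not the bound itself: the paper's contribution is a high-confidence, distribution-free choice of `eps` under which this cheap certificate provably holds.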
Problem

Research questions and friction points this paper is trying to address.

Concentration inequalities for semidefinite constrained least squares problems
Finite-sample guarantees on optimal solution spectrum with relaxed constraints
Error bounds between unconstrained gradient descent and true minimizer
Innovation

Methods, ideas, or system contributions that make the work stand out.

Semidefinite least squares with relaxed constraints
Distribution-free high confidence eigenvalue bounds
Gradient descent error bounds for quadratic learning
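The last bullet can be illustrated with a toy run of gradient descent on an unconstrained surrogate LS cost, tracking the distance of the iterate to the closed-form minimizer. The dimensions, step-size rule, and iteration count below are illustrative assumptions, not the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical surrogate cost g(theta) = ||Phi theta - y||^2 / n, minimized
# by plain gradient descent instead of solving a constrained SDP.
n, p = 200, 5
Phi = rng.standard_normal((n, p))
theta_star = rng.standard_normal(p)
y = Phi @ theta_star + 0.01 * rng.standard_normal(n)

# Closed-form minimizer of the surrogate cost (the comparison target).
theta_opt, *_ = np.linalg.lstsq(Phi, y, rcond=None)

# Step size 1/L, where L is the largest eigenvalue of the Hessian
# (2/n) Phi^T Phi, so every eigen-direction of the error contracts.
H = 2.0 / n * Phi.T @ Phi
step = 1.0 / np.linalg.eigvalsh(H).max()

theta = np.zeros(p)
errors = []
for _ in range(300):
    grad = 2.0 / n * Phi.T @ (Phi @ theta - y)
    theta -= step * grad
    errors.append(np.linalg.norm(theta - theta_opt))

print(errors[0], errors[-1])
```

On a strongly convex quadratic like this one, the error contracts geometrically, which is the kind of iterate-to-minimizer gap the paper's bounds quantify.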