Online Inference of Constrained Optimization: Primal-Dual Optimality and Sequential Quadratic Programming

📅 2025-11-27
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper addresses online statistical inference for stochastic optimization problems with both equality and inequality constraints—arising in constrained M-estimation, physics-informed modeling, safe reinforcement learning, and algorithmic fairness. Existing methods suffer from inference bias due to step-direction distortion, reliance on projection operators, and difficulty handling nonlinear constraints. To overcome these limitations, the authors propose the first fully online, projection-free stochastic sequential quadratic programming (SSQP) method. Key contributions include: (1) achieving primal–dual asymptotically minimax-optimal inference for constrained stochastic optimization; (2) introducing a momentum-based gradient moving average to eliminate directional bias; (3) employing linearized constraints and quadratic approximations of the objective to avoid intractable projections; and (4) designing a plug-in covariance estimator enabling real-time confidence interval construction. The paper establishes global almost-sure convergence and local asymptotic normality. Experiments on synthetic data, generalized linear models, and portfolio optimization demonstrate substantial improvements over baselines.

📝 Abstract
We study online statistical inference for the solutions of stochastic optimization problems with equality and inequality constraints. Such problems are prevalent in statistics and machine learning, encompassing constrained $M$-estimation, physics-informed models, safe reinforcement learning, and algorithmic fairness. We develop a stochastic sequential quadratic programming (SSQP) method to solve these problems, where the step direction is computed by sequentially performing a quadratic approximation of the objective and a linear approximation of the constraints. Despite having access to unbiased estimates of population gradients, a key challenge in constrained stochastic problems lies in dealing with the bias in the step direction. As such, we apply a momentum-style gradient moving-average technique within SSQP to debias the step. We show that our method achieves global almost-sure convergence and exhibits local asymptotic normality with an optimal primal-dual limiting covariance matrix in the sense of Hájek and Le Cam. In addition, we provide a plug-in covariance matrix estimator for practical inference. To our knowledge, the proposed SSQP method is the first fully online method that attains primal-dual asymptotic minimax optimality without relying on projection operators onto the constraint set, which are generally intractable for nonlinear problems. Through extensive experiments on benchmark nonlinear problems, as well as on constrained generalized linear models and portfolio allocation problems using both synthetic and real data, we demonstrate superior performance of our method, showing that the method and its asymptotic behavior not only solve constrained stochastic problems efficiently but also provide valid and practical online inference in real-world applications.
Problem

Research questions and friction points this paper is trying to address.

Online inference for constrained stochastic optimization problems
Debiasing step direction in constrained stochastic optimization
Achieving primal-dual asymptotic minimax optimality without projection operators
Innovation

Methods, ideas, or system contributions that make the work stand out.

Stochastic sequential quadratic programming for online inference
Momentum-style gradient moving-average to debias steps
Plug-in covariance estimator for practical online inference
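To make the method's two core ingredients concrete, here is a minimal sketch of one SSQP iteration: a momentum-style moving average of stochastic gradients, followed by a quadratic subproblem with linearized constraints solved via its KKT system. All names (`ssqp_step`, `grad_sample`) and the toy problem are illustrative assumptions, not the paper's implementation; the Hessian approximation is fixed to the identity, and only linear equality constraints are handled, so the constraint linearization is exact.

```python
import numpy as np

def ssqp_step(x, g_bar, grad_sample, A, b, alpha, beta):
    """One illustrative SSQP iteration for min f(x) s.t. A x = b."""
    # Momentum-style gradient moving average (the debiasing device).
    g_bar = (1 - beta) * g_bar + beta * grad_sample(x)
    n, m = x.size, b.size
    # KKT system of the quadratic subproblem with B = I:
    #   min_d  g_bar^T d + 0.5 ||d||^2   s.t.  A (x + d) = b
    K = np.block([[np.eye(n), A.T], [A, np.zeros((m, m))]])
    rhs = np.concatenate([-g_bar, b - A @ x])
    sol = np.linalg.solve(K, rhs)
    d, lam = sol[:n], sol[n:]  # primal step direction and dual estimate
    return x + alpha * d, g_bar, lam

# Toy problem: min E||x - xi||^2 with xi ~ N(mu, I), s.t. sum(x) = 1.
rng = np.random.default_rng(0)
mu = np.array([0.5, 1.5, 2.0])
grad_sample = lambda x: 2.0 * (x - (mu + rng.standard_normal(3)))
A, b = np.ones((1, 3)), np.array([1.0])

x, g_bar = np.zeros(3), np.zeros(3)
for t in range(1, 5001):
    x, g_bar, lam = ssqp_step(x, g_bar, grad_sample, A, b,
                              alpha=1.0 / t**0.7,
                              beta=min(1.0, 2.0 / t**0.6))
# x satisfies sum(x) = 1 and tracks the constrained minimizer mu - 1.
```

After the first step the iterate is exactly feasible, and every subsequent step direction lies in the null space of `A`, so no projection onto the constraint set is ever needed; the dual iterate `lam` comes out of the same KKT solve, which is what enables joint primal-dual inference.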