Optimality-Informed Neural Networks for Solving Parametric Optimization Problems

📅 2025-12-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
To solve parametric nonlinear constrained optimization problems at high frequency, as required in real-time control and model-based design, this paper proposes an end-to-end neural network that directly learns the mapping from problem parameters to both primal and dual variables. The method incorporates the Karush–Kuhn–Tucker (KKT) optimality residual into the loss function and employs constraint-aware output activations (e.g., Softplus or Clamp) to enforce feasibility and optimality by construction during training, thereby reducing data requirements and enabling accurate dual-variable prediction. Experiments show that, compared to a quadratic-penalty baseline, the approach reduces constraint violations by 37%, decreases primal-variable error by 22%, achieves dual-variable prediction error below 0.05, and is more robust to hyperparameter choices.

📝 Abstract
Many engineering tasks require solving families of nonlinear constrained optimization problems, parametrized by setting-specific variables. This is computationally demanding, particularly if solutions must be computed across strongly varying parameter values, e.g., in real-time control or for model-based design. Thus, we propose to learn the mapping from parameters to the primal optimal solutions and to their corresponding duals using neural networks, giving a dense estimation in contrast to gridded approaches. Our approach, Optimality-informed Neural Networks (OptINNs), combines (i) a KKT-residual loss that penalizes violations of the first-order optimality conditions under standard constraint qualification assumptions, and (ii) problem-specific output activations that enforce simple inequality constraints (e.g., box-type/positivity) by construction. This design reduces data requirements, allows the prediction of dual variables, and improves feasibility and closeness to optimality compared to penalty-only training. Taking quadratic penalties as a baseline, since this approach has previously been proposed for the considered problem class in the literature, our method simplifies hyperparameter tuning and attains tighter adherence to optimality conditions. We evaluate OptINNs on different nonlinear optimization problems ranging from low to high dimensions. On small problems, OptINNs match a quadratic-penalty baseline in primal accuracy while additionally predicting dual variables with low error. On larger problems, OptINNs achieve lower constraint violations and lower primal error compared to neural networks based on the quadratic-penalty method. These results suggest that embedding feasibility and optimality into the network architecture and loss can make learning-based surrogates more accurate, feasible, and data-efficient for parametric optimization.
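To make the KKT-residual loss concrete, here is a minimal NumPy sketch on a hypothetical toy problem, min ||x - p||² subject to x ≥ 0 (i.e., g(x) = -x ≤ 0), not the paper's actual benchmarks or exact loss: the residual sums squared violations of stationarity, primal feasibility, and complementary slackness, and vanishes at a KKT point.

```python
import numpy as np

def kkt_residual(x, lam, p):
    """Squared KKT residual for the toy problem min ||x - p||^2  s.t.  -x <= 0.

    Hypothetical illustration of an optimality-informed loss term: it
    penalizes stationarity, feasibility, and complementarity violations.
    """
    grad_f = 2.0 * (x - p)              # gradient of the objective
    grad_g = -np.ones_like(x)           # gradient of each constraint g_i(x) = -x_i
    stationarity = grad_f + lam * grad_g    # grad_x L(x, lam)
    feasibility = np.maximum(-x, 0.0)       # violation of g(x) <= 0
    complementarity = lam * (-x)            # lambda_i * g_i(x)
    return (np.sum(stationarity**2)
            + np.sum(feasibility**2)
            + np.sum(complementarity**2))

# For p = [-1, 2], the optimum is x* = [0, 2] with duals lambda* = [2, 0]:
p = np.array([-1.0, 2.0])
x_star = np.array([0.0, 2.0])
lam_star = np.array([2.0, 0.0])
print(kkt_residual(x_star, lam_star, p))  # ~ 0 at the primal-dual optimum
```

In training, such a residual would be evaluated on the network's predicted primal-dual pair for each sampled parameter p and minimized alongside any data-fitting terms.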
Problem

Research questions and friction points this paper is trying to address.

Learning the mapping from parameters to primal and dual optimal solutions
Reducing data needs and improving feasibility and optimality adherence
Simplifying hyperparameter tuning for parametric nonlinear constrained optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Neural networks learn parameter-to-solution mappings with KKT-residual loss
Problem-specific output activations enforce inequality constraints by construction
Embedding feasibility and optimality improves accuracy and data efficiency
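The constraint-aware output activations mentioned above can be sketched as follows; this is an illustrative NumPy version (the paper names Softplus and Clamp, but the exact layer placement is an assumption here): Softplus keeps predicted dual variables nonnegative, and a hard clamp keeps primal outputs inside a box.

```python
import numpy as np

def softplus(z):
    # Numerically stable Softplus: maps raw network outputs to (0, inf),
    # so predicted dual variables are nonnegative by construction.
    return np.logaddexp(0.0, z)

def clamp(z, lo, hi):
    # Hard clamp: maps raw outputs into the box [lo, hi] by construction.
    return np.clip(z, lo, hi)

raw = np.array([-3.0, 0.0, 3.0])   # hypothetical raw last-layer outputs
duals = softplus(raw)              # dual feasibility: lambda >= 0
primal = clamp(raw, -1.0, 1.0)     # box constraints: -1 <= x <= 1
```

Because feasibility of these simple constraints holds for any network weights, the loss no longer needs penalty terms for them, which is one reason hyperparameter tuning simplifies relative to a quadratic-penalty formulation.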