Nonlinear Optimization with GPU-Accelerated Neural Network Constraints

📅 2025-09-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Neural network–embedded nonlinear optimization suffers from high-dimensional decision variables and slow convergence. Method: We propose a GPU-accelerated framework that treats pre-trained neural networks as evaluable and differentiable "gray-box" constraints. By modeling only the reduced input–output mapping (without exposing internal neurons or architecture to the solver), we drastically lower variable dimensionality; GPU-parallelized forward propagation and reverse-mode automatic differentiation are tightly integrated into an interior-point solver for efficient gradient computation. Contribution/Results: Evaluated on MNIST adversarial example generation and power system security-constrained dispatch, the method reduces iteration counts by 37% on average and achieves a 2.1–3.8× speedup in total solve time while preserving solution accuracy, establishing a scalable paradigm for neuro-optimization co-modeling.

📝 Abstract
We propose a reduced-space formulation for optimizing over trained neural networks where the network's outputs and derivatives are evaluated on a GPU. To do this, we treat the neural network as a "gray box" in which intermediate variables and constraints are not exposed to the optimization solver. Compared to the full-space formulation, in which intermediate variables and constraints are exposed to the optimization solver, the reduced-space formulation leads to faster solves and fewer iterations in an interior point method. We demonstrate the benefits of this method on two optimization problems: adversarial example generation for a classifier trained on MNIST images, and security-constrained optimal power flow with transient feasibility enforced using a neural network surrogate.
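To make the reduced-space idea concrete, here is a minimal sketch (our own illustration, not the paper's implementation) of a tiny pre-trained network wrapped as a gray box: the solver only ever queries the input–output value and its gradient, so it sees 3 decision variables rather than 3 input + 8 hidden + 1 output variables plus 9 equality constraints, as a full-space formulation would expose. The weights and layer sizes here are placeholders; the gradient is computed by a manual reverse-mode sweep, standing in for the GPU-accelerated automatic differentiation described in the abstract.

```python
import numpy as np

# Hypothetical tiny "pre-trained" network; the solver never sees these internals.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((8, 3)), np.zeros(8)   # hidden layer, 8 neurons
W2, b2 = rng.standard_normal((1, 8)), np.zeros(1)   # scalar output

def gray_box(x):
    """Return c(x) = NN(x) and its gradient via a manual reverse-mode sweep.

    In the reduced-space formulation, this value/gradient oracle is all the
    interior-point solver interacts with: no hidden-layer variables or
    activation equality constraints are added to the optimization model.
    """
    z1 = W1 @ x + b1
    a1 = np.tanh(z1)                     # forward pass
    y = W2 @ a1 + b2
    # reverse sweep: dy/dx = W1^T diag(1 - tanh(z1)^2) W2^T
    grad = W1.T @ ((1.0 - a1**2) * W2.ravel())
    return y.item(), grad

x = np.array([0.1, -0.2, 0.3])
val, grad = gray_box(x)

# Finite-difference check of the first gradient component
eps = 1e-6
fd = (gray_box(x + eps * np.eye(3)[0])[0] - val) / eps
```

On a GPU, the same forward and reverse sweeps would be batched over many constraint evaluations at once, which is where the speedups reported above come from.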
Problem

Research questions and friction points this paper is trying to address.

Optimizing nonlinear problems with neural network constraints
Accelerating derivative evaluations using GPU computation
Improving efficiency over full-space optimization formulations
Innovation

Methods, ideas, or system contributions that make the work stand out.

GPU-accelerated neural network constraint evaluation
Reduced-space formulation treating networks as gray boxes
Faster optimization with fewer interior-point iterations
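The innovations above hinge on the interior-point solver consuming the network purely through value and gradient queries. The following toy 1-D log-barrier loop (our own simplified example, not the paper's solver) shows that interaction pattern: `c(x)` stands in for a neural-network constraint of the form NN(x) − limit ≤ 0, and the solver only calls `c` and `c_grad`, never inspecting the constraint's internals.

```python
def f_grad(x):
    # gradient of the objective f(x) = (x - 2)^2, which pulls x toward 2
    return 2.0 * (x - 2.0)

def c(x):
    # black-box constraint c(x) <= 0; feasible region is (-1, 1).
    # In the paper's setting this would be a neural-network surrogate.
    return x * x - 1.0

def c_grad(x):
    # gradient query; in the paper this comes from GPU reverse-mode AD
    return 2.0 * x

def barrier_grad(x, mu):
    # gradient of the log-barrier merit function
    # phi(x) = f(x) - mu * log(-c(x))
    return f_grad(x) - mu * c_grad(x) / c(x)

x, mu = 0.0, 0.1
for _ in range(300):                  # crude fixed-mu gradient descent
    x -= 0.01 * barrier_grad(x, mu)

# x settles just inside the boundary near the constrained optimum x = 1;
# a real interior-point method would also drive mu toward 0.
```

A production interior-point method additionally uses second-order information and a decreasing barrier parameter, but the reduced-space interface is the same: the constraint contributes only evaluations and derivatives, keeping the solver's variable count small.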
Robert Parker
Los Alamos National Laboratory, Los Alamos, NM 87545, USA
Oscar Dowson
Dowson Farms
Optimization, Operations Research, algebraic modeling languages
Nicole LoGiudice
Texas A&M University, College Station, TX 77843, USA
Manuel Garcia
Los Alamos National Laboratory, Los Alamos, NM 87545, USA
Russell Bent
Scientist, Los Alamos National Laboratory
Operations Research, Artificial Intelligence, Power Systems, Modeling and Simulation