AI Summary
Existing differentiable optimization frameworks suffer from fragmented modeling interfaces, cumbersome parameter differentiation, and poor solver compatibility. This paper introduces the first general-purpose differentiable optimization framework natively supporting parametric JuMP modeling. Grounded in the Karush-Kuhn-Tucker (KKT) conditions and under standard regularity assumptions, it unifies forward- and reverse-mode sensitivity analysis for both convex and nonconvex problems. Key contributions include: (1) first-class parameter abstractions enabling automatic, named-parameter differentiation across objectives and constraints, eliminating low-level coefficient manipulation; (2) deep integration with DiffOpt.jl and the JuMP ecosystem while preserving solver agnosticism; and (3) empirical validation on economic dispatch, portfolio optimization, and robotic inverse kinematics, plus successful deployment in energy market bidding and end-to-end Sobolev training, demonstrating substantial efficiency gains in the modeling-optimization-learning feedback loop.
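To make the named-parameter workflow concrete, here is a minimal forward-mode sketch in Julia against a toy dispatch-style LP. The model and numbers are illustrative, not taken from the paper, and the helper `DiffOpt.set_forward_parameter` is assumed from recent DiffOpt.jl releases (older releases expose the same step through MOI attributes).

```julia
using JuMP, DiffOpt, HiGHS

# Wrap any MOI-compatible solver in DiffOpt's differentiable optimizer.
model = Model(() -> DiffOpt.diff_optimizer(HiGHS.Optimizer))
set_silent(model)

# A named parameter: the demand level of a toy dispatch problem.
@variable(model, d in Parameter(10.0))
@variable(model, g[1:2] >= 0)                  # generator outputs
@constraint(model, balance, g[1] + g[2] == d)  # meet demand d
@constraint(model, cap, g[1] <= 6)             # cheap unit is capacity-limited
@objective(model, Min, 3g[1] + 5g[2])          # linear generation costs

optimize!(model)

# Forward mode: seed a unit perturbation on the parameter d ...
DiffOpt.set_forward_parameter(model, d, 1.0)   # assumed helper name
DiffOpt.forward_differentiate!(model)

# ... and read dg*/dd for each generator; the marginal unit g[2]
# absorbs the extra demand, so we expect (0.0, 1.0).
dg = [MOI.get(model, DiffOpt.ForwardVariablePrimal(), gi) for gi in g]
@show dg
```

Note how `d` is declared once and differentiated by name; there is no need to locate and perturb the coefficient of `d` inside each constraint where it appears.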
Abstract
Differentiating through constrained optimization problems is increasingly central to learning, control, and large-scale decision-making systems, yet practical integration remains challenging due to solver specialization and interface mismatches. This paper presents a general and streamlined framework, an updated DiffOpt.jl, that unifies modeling and differentiation within the Julia optimization stack. The framework computes forward- and reverse-mode solution and objective sensitivities for smooth, potentially nonconvex programs by differentiating the KKT system under standard regularity assumptions. A first-class, JuMP-native parameter-centric API allows users to declare named parameters and obtain derivatives directly with respect to them, even when a parameter appears in multiple constraints and objectives, eliminating the brittle bookkeeping of coefficient-level interfaces. We illustrate these capabilities on convex and nonconvex models, including economic dispatch, mean-variance portfolio selection with conic risk constraints, and nonlinear robot inverse kinematics. Two companion studies further demonstrate impact at scale: gradient-based iterative methods for strategic bidding in energy markets and Sobolev-style training of end-to-end optimization proxies using solver-accurate sensitivities. Together, these results demonstrate that differentiable optimization can be deployed as a routine tool for experimentation, learning, calibration, and design, without deviating from standard JuMP modeling practices while retaining access to a broad ecosystem of solvers.
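The reverse mode mirrors this pattern, and it is what learning pipelines such as the Sobolev-style training mentioned above rely on: seed a cotangent on the optimal solution, then pull it back to all named parameters in one pass. Below is a minimal sketch with illustrative data; `DiffOpt.get_reverse_parameter` is assumed from recent DiffOpt.jl releases, while `DiffOpt.ReverseVariablePrimal` is the long-standing MOI attribute for seeding.

```julia
using JuMP, DiffOpt, HiGHS

# Toy parametric LP with x*(b) = b, so dL/db should equal the seed dL/dx*.
model = Model(() -> DiffOpt.diff_optimizer(HiGHS.Optimizer))
set_silent(model)

@variable(model, b in Parameter(3.0))
@variable(model, x >= 0)
@constraint(model, con, x >= b)
@objective(model, Min, 2x)
optimize!(model)                    # x* = 3.0

# Reverse mode: seed the incoming cotangent dL/dx* on the primal ...
MOI.set(model, DiffOpt.ReverseVariablePrimal(), x, 1.0)
DiffOpt.reverse_differentiate!(model)

# ... and read dL/db back on the named parameter (assumed helper name);
# here we expect 1.0.
@show DiffOpt.get_reverse_parameter(model, b)
```

In a training loop, the seed would be the loss gradient at the optimal solution, and the recovered parameter gradients would feed a standard optimizer step, with the solver itself left unchanged.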