A General and Streamlined Differentiable Optimization Framework

šŸ“… 2025-10-29
šŸ“ˆ Citations: 0
✨ Influential: 0
šŸ¤– AI Summary
Existing differentiable optimization frameworks suffer from fragmented modeling interfaces, cumbersome parameter differentiation, and poor solver compatibility. This paper introduces the first general-purpose differentiable optimization framework to natively support parametric JuMP modeling. Grounded in the Karush–Kuhn–Tucker (KKT) conditions and under standard regularity assumptions, it unifies forward- and reverse-mode sensitivity analysis for both convex and nonconvex problems. Key contributions include: (1) first-class parameter abstractions that enable automatic, named-parameter differentiation across objectives and constraints, eliminating low-level coefficient manipulation; (2) deep integration with DiffOpt.jl and the JuMP ecosystem while preserving solver agnosticism; and (3) empirical validation on economic dispatch, portfolio optimization, and robotic inverse kinematics, plus successful deployment in energy-market bidding and end-to-end Sobolev training, demonstrating substantial efficiency gains in the modeling–optimization–learning feedback loop.
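
The named-parameter workflow in contribution (1) can be pictured with a minimal JuMP/DiffOpt.jl sketch. The `set_forward_parameter` and `get_forward_variable` helpers below are modeled on the parameter-centric API as described on this page; treat the exact names and signatures as assumptions that may differ across DiffOpt.jl releases.

```julia
using JuMP, DiffOpt, HiGHS

# DiffOpt wraps the underlying solver, keeping the model solver-agnostic.
model = JuMP.Model(() -> DiffOpt.diff_optimizer(HiGHS.Optimizer))

# A named parameter declared directly in the model (values are illustrative).
@variable(model, p in Parameter(2.0))
@variable(model, x >= 0)
@constraint(model, x >= p)
@objective(model, Min, 2x)
optimize!(model)                      # x* = p = 2

# Forward mode: seed dp = 1 and read off dx/dp against the named parameter,
# with no manipulation of individual constraint coefficients.
DiffOpt.set_forward_parameter(model, p, 1.0)
DiffOpt.forward_differentiate!(model)
dx_dp = DiffOpt.get_forward_variable(model, x)    # expected: 1.0
```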

šŸ“ Abstract
Differentiating through constrained optimization problems is increasingly central to learning, control, and large-scale decision-making systems, yet practical integration remains challenging due to solver specialization and interface mismatches. This paper presents a general and streamlined framework, an updated DiffOpt.jl, that unifies modeling and differentiation within the Julia optimization stack. The framework computes forward- and reverse-mode solution and objective sensitivities for smooth, potentially nonconvex programs by differentiating the KKT system under standard regularity assumptions. A first-class, JuMP-native, parameter-centric API allows users to declare named parameters and obtain derivatives directly with respect to them, even when a parameter appears in multiple constraints and objectives, eliminating the brittle bookkeeping of coefficient-level interfaces. We illustrate these capabilities on convex and nonconvex models, including economic dispatch, mean-variance portfolio selection with conic risk constraints, and nonlinear robot inverse kinematics. Two companion studies further demonstrate impact at scale: gradient-based iterative methods for strategic bidding in energy markets, and Sobolev-style training of end-to-end optimization proxies using solver-accurate sensitivities. Together, these results demonstrate that differentiable optimization can be deployed as a routine tool for experimentation, learning, calibration, and design, without deviating from standard JuMP modeling practices and while retaining access to a broad ecosystem of solvers.
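
To spell out the mechanism the abstract invokes, the following is a standard sketch, in generic notation rather than the paper's, of differentiating the KKT system under the stated regularity assumptions:

```latex
% A smooth parametric program:
\min_{x}\; f(x;\theta)
\quad \text{s.t.} \quad g(x;\theta) \le 0, \;\; h(x;\theta) = 0.
%
% With z = (x, \lambda, \mu) and Lagrangian
% L = f + \lambda^\top g + \mu^\top h, the KKT residual is
F(z;\theta) =
\begin{pmatrix}
\nabla_x L(z;\theta) \\
\operatorname{diag}(\lambda)\, g(x;\theta) \\
h(x;\theta)
\end{pmatrix} = 0.
%
% Under regularity (e.g., LICQ and strict complementarity),
% \partial F/\partial z is invertible at a solution, and implicit
% differentiation of F(z(\theta);\theta) = 0 gives
\frac{\partial z}{\partial \theta}
  = -\left(\frac{\partial F}{\partial z}\right)^{-1}
     \frac{\partial F}{\partial \theta}.
%
% Forward mode: for a direction \dot\theta, solve
%   (\partial F/\partial z)\,\dot z = -(\partial F/\partial \theta)\,\dot\theta.
% Reverse mode: for a cotangent \bar z, solve
%   (\partial F/\partial z)^\top w = -\bar z, then
%   \bar\theta = (\partial F/\partial \theta)^\top w.
```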
Problem

Research questions and friction points this paper is trying to address.

Unifying modeling and differentiation for constrained optimization problems
Computing sensitivities through KKT systems for nonconvex programs
Providing parameter-centric derivatives without manual bookkeeping (see the sketch after this list)
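
To make the bookkeeping point concrete: a single named parameter may appear in several constraints, and the derivative is requested against the parameter itself rather than against each coefficient it touches. As in the earlier sketch, the helper names are assumptions modeled on the described API, not verified signatures.

```julia
using JuMP, DiffOpt, HiGHS

model = JuMP.Model(() -> DiffOpt.diff_optimizer(HiGHS.Optimizer))
@variable(model, c in Parameter(3.0))   # one named parameter...
@variable(model, x >= 0)
@variable(model, y >= 0)
@constraint(model, x >= c)              # ...appearing in one constraint
@constraint(model, x + y >= 2c)         # ...and again in another
@objective(model, Min, x + 2y)
optimize!(model)                        # x* = 6, y* = 0 at c = 3

# A single seed on c covers both appearances; there is no per-constraint
# coefficient bookkeeping.
DiffOpt.set_forward_parameter(model, c, 1.0)
DiffOpt.forward_differentiate!(model)
dx_dc = DiffOpt.get_forward_variable(model, x)   # expected: 2.0
dy_dc = DiffOpt.get_forward_variable(model, y)   # expected: 0.0
```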
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unifies modeling and differentiation in the Julia optimization stack
Computes forward- and reverse-mode sensitivities via the KKT system (a reverse-mode sketch follows this list)
Provides a parameter-centric API for direct derivative computation
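
To complement the forward-mode sketches above, a hedged reverse-mode example: seed a cotangent on the optimal solution and pull it back to the named parameter in one pass, the economical direction when a scalar learning loss depends on many parameters. The `set_reverse_variable` and `get_reverse_parameter` helpers are again assumptions modeled on the API described on this page.

```julia
using JuMP, DiffOpt, HiGHS

model = JuMP.Model(() -> DiffOpt.diff_optimizer(HiGHS.Optimizer))
@variable(model, p in Parameter(2.0))
@variable(model, x >= 0)
@constraint(model, x >= p)
@objective(model, Min, 2x)
optimize!(model)                      # x* = 2 at p = 2

# Reverse mode: seed dL/dx* = 1 and recover dL/dp through the KKT system.
DiffOpt.set_reverse_variable(model, x, 1.0)
DiffOpt.reverse_differentiate!(model)
dL_dp = DiffOpt.get_reverse_parameter(model, p)   # expected: 1.0
```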