A Fully First-Order Layer for Differentiable Optimization

📅 2025-12-02
🤖 AI Summary
Traditional differentiable optimization layers rely on implicit differentiation, which requires solving Hessian-based linear systems at prohibitive computational and memory cost. This paper proposes FFOLayer, a fully first-order differentiable optimization layer: it reformulates the embedded optimization as a bilevel problem and combines active-set identification with a Lagrangian hypergradient mechanism to obtain finite-time, non-asymptotic hypergradient approximations using only first-order information. Theoretically, FFOLayer matches the convergence rate of optimal nonsmooth nonconvex methods, with overall complexity $\tilde{\mathcal{O}}(\delta^{-1}\epsilon^{-3})$. Crucially, it eliminates Hessian computation and storage, drastically reducing overhead. The authors also release an open-source, plug-and-play Python library that integrates with mainstream deep learning frameworks and optimization solvers.
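For context, the implicit-differentiation route that FFOLayer avoids can be written out explicitly. For a layer $y^\star(\theta) = \arg\min_y f(y, \theta)$ with a smooth, unconstrained inner problem, the implicit function theorem gives (this is the standard identity, not notation from this paper):

$$\frac{\partial y^\star}{\partial \theta} \;=\; -\left(\nabla^2_{yy} f(y^\star, \theta)\right)^{-1} \nabla^2_{y\theta} f(y^\star, \theta),$$

so every backward pass must solve a linear system against the Hessian $\nabla^2_{yy} f$; this is precisely the computation the first-order approach eliminates.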

📝 Abstract
Differentiable optimization layers enable learning systems to make decisions by solving embedded optimization problems. However, computing gradients via implicit differentiation requires solving a linear system with Hessian terms, which is both compute- and memory-intensive. To address this challenge, we propose a novel algorithm that computes the gradient using only first-order information. The key insight is to rewrite the differentiable optimization as a bilevel optimization problem and leverage recent advances in bilevel methods. Specifically, we introduce an active-set Lagrangian hypergradient oracle that avoids Hessian evaluations and provides finite-time, non-asymptotic approximation guarantees. We show that an approximate hypergradient can be computed using only first-order information in $\tilde{\mathcal{O}}(1)$ time, leading to an overall complexity of $\tilde{\mathcal{O}}(\delta^{-1}\epsilon^{-3})$ for constrained bilevel optimization, which matches the best known rate for non-smooth non-convex optimization. Furthermore, we release an open-source Python library that can be easily adapted from existing solvers. Our code is available here: https://github.com/guaguakai/FFOLayer.
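To make the "first-order only" idea concrete, here is a toy sketch of a penalty-style (value-function) hypergradient, a generic scheme from the bilevel literature rather than the paper's exact oracle; the quadratic objectives and the penalty weight `lam` are illustrative assumptions:

```python
import numpy as np

# Toy bilevel problem: inner y*(theta) = argmin_y f(y, theta),
# outer F(theta) = L(y*(theta)), with
#   f(y, theta) = 0.5 * ||y - A @ theta||^2 and L(y) = 0.5 * ||y - b||^2.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
b = rng.standard_normal(4)
theta = rng.standard_normal(3)

def f_grad_theta(y, theta):
    # First-order information only: gradient of f w.r.t. theta.
    return -A.T @ (y - A @ theta)

# Closed-form inner solutions for this quadratic toy problem:
y_star = A @ theta                       # argmin_y f(y, theta)
lam = 1e4                                # penalty weight (illustrative)
# argmin_y [ L(y) / lam + f(y, theta) ]:
y_lam = (b + lam * A @ theta) / (1.0 + lam)

# First-order hypergradient approximation: no Hessian, no linear solve.
hypergrad = lam * (f_grad_theta(y_lam, theta) - f_grad_theta(y_star, theta))

exact = A.T @ (A @ theta - b)            # true gradient of F(theta)
print(np.max(np.abs(hypergrad - exact)))  # shrinks as lam grows
```

The approximation error here scales like $1/(1+\lambda)$, illustrating the kind of finite-time, Hessian-free guarantee the abstract describes, though FFOLayer's actual oracle additionally handles constraints via active sets.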
Problem

Research questions and friction points this paper is trying to address.

Develop a first-order gradient computation method for differentiable optimization layers
Avoid Hessian evaluations to reduce computational and memory costs
Provide finite-time approximation guarantees for constrained bilevel optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

First-order gradient computation without Hessian
Active-set Lagrangian hypergradient oracle for efficiency
Open-source Python library for easy adaptation
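A minimal sketch of the active-set idea behind the oracle (illustrative only, not the FFOLayer API): for a box-constrained projection layer, the constraints active at the solution are exactly the clipped coordinates, and locally the layer's Jacobian is the identity on the inactive set and zero on the active set.

```python
import numpy as np

def box_projection_layer(theta, lo=0.0, hi=1.0, tol=1e-9):
    """Solve min_y 0.5 * ||y - theta||^2 s.t. lo <= y <= hi,
    and identify the active constraint set at the solution."""
    y = np.clip(theta, lo, hi)
    # A bound is active where the solution sits on it.
    active = (np.abs(y - lo) <= tol) | (np.abs(y - hi) <= tol)
    # Locally the layer is the identity on inactive coordinates, so its
    # Jacobian is diagonal: 1 where inactive, 0 where a bound is active.
    jac_diag = (~active).astype(float)
    return y, active, jac_diag

theta = np.array([-0.5, 0.3, 0.8, 1.7])
y, active, jac_diag = box_projection_layer(theta)
print(y)         # [0.  0.3 0.8 1. ]
print(jac_diag)  # [0. 1. 1. 0.]
```

Once the active set is fixed, the constrained problem behaves locally like an equality-constrained one, which is what lets a first-order Lagrangian scheme replace the Hessian solve of implicit differentiation.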
Authors

Zihao Zhao (Georgia Institute of Technology)
Kai-Chia Mo (Two Sigma)
Shing-Hei Ho (Georgia Tech)
Brandon Amos (Meta)
Kai Wang (Georgia Institute of Technology)