Efficient End-to-End Learning for Decision-Making: A Meta-Optimization Approach

📅 2025-05-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
In end-to-end learning, the inner-loop optimization (repeatedly solving expensive constrained problems) incurs prohibitive computational overhead during training. To address this, we propose a meta-optimization framework that learns lightweight, differentiable neural optimization algorithms to replace traditional iterative solvers. Our core contribution is a novel neural architecture grounded in alternating projections, which rigorously enforces feasibility constraints; we provide theoretical guarantees on its exponential convergence rate, approximation accuracy, and generalization error. The framework unifies the treatment of deterministic and two-stage stochastic optimization. Evaluated on real-world applications, including power dispatch, terrain-aware path planning, and multi-warehouse newsvendor problems, our method accelerates training by several-fold to over an order of magnitude, scales better than existing end-to-end approaches, and maintains decision quality without degradation.

📝 Abstract
End-to-end learning has become a widely studied approach for training predictive ML models to account for their impact on downstream decision-making tasks. These end-to-end models often outperform traditional methods that separate training from optimization and focus myopically on prediction error. However, the computational complexity of end-to-end frameworks poses a significant challenge, particularly for large-scale problems: when training an ML model with gradient descent, every gradient computation requires solving an expensive optimization problem. We present a meta-optimization method that learns efficient algorithms to approximate optimization problems, dramatically reducing the computational overhead of solving the decision problem in general; we leverage this speedup during training within the end-to-end framework. Our approach introduces a neural network architecture that near-optimally solves optimization problems while ensuring feasibility constraints through alternating projections. We prove exponential convergence, approximation guarantees, and generalization bounds for our learning method. The method offers superior computational efficiency, producing high-quality approximations faster and scaling better with problem size than existing techniques. Our approach applies to a wide range of optimization problems, including deterministic, single-stage, and two-stage stochastic optimization problems. We illustrate how the proposed method applies to (1) an electricity generation problem using real data from an electricity routing company coordinating the movement of electricity throughout 13 states, (2) a shortest path problem with a computer vision task of predicting edge costs from terrain maps, (3) a two-stage multi-warehouse cross-fulfillment newsvendor problem, as well as a variety of other newsvendor-like problems.
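The feasibility mechanism the abstract mentions, alternating Euclidean projections onto the individual constraint sets, can be sketched on a toy problem. The snippet below is illustrative only, not the paper's architecture: it assumes the feasible set is the intersection of a hyperplane {x : aᵀx = b} and the nonnegative orthant, and all function names are made up for this example.

```python
import numpy as np

def project_hyperplane(x, a, b):
    # Euclidean projection onto the affine set {x : a @ x == b}
    return x - ((a @ x - b) / (a @ a)) * a

def project_nonneg(x):
    # Euclidean projection onto the nonnegative orthant {x : x >= 0}
    return np.maximum(x, 0.0)

def alternating_projections(x0, a, b, iters=100):
    # Alternate projections onto the two convex sets; for convex sets
    # with nonempty intersection this converges to a feasible point.
    x = x0
    for _ in range(iters):
        x = project_nonneg(project_hyperplane(x, a, b))
    return x

# Feasible set here: the probability simplex in R^3 (nonnegative, sums to 1)
a, b = np.ones(3), 1.0
x = alternating_projections(np.array([2.0, -1.0, 0.5]), a, b)
```

Each projection has a closed form, which is what makes the scheme cheap relative to calling a full constrained solver at every training step.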
Problem

Research questions and friction points this paper is trying to address.

Reducing computational complexity in end-to-end decision-making learning
Developing efficient meta-optimization for large-scale optimization problems
Ensuring feasibility and scalability in neural network-based optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Meta-optimization method reduces computational overhead
Neural network ensures feasibility via alternating projections
Applies to deterministic and stochastic optimization problems
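The "feasibility via alternating projections" idea above can be pictured as a differentiable output layer: the network emits an unconstrained raw decision, and a fixed number of unrolled projection steps maps it into the feasible set. This is a minimal hypothetical sketch under assumed constraint sets (a sum constraint plus nonnegativity); the weights, names, and layer structure are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def feasibility_layer(z, a, b, steps=100):
    # Unroll a fixed number of alternating-projection steps so a raw
    # network output z is mapped to a (near-)feasible decision. Each
    # step is piecewise linear, hence differentiable almost everywhere,
    # so the layer can sit at the end of a trainable network.
    x = z
    for _ in range(steps):
        x = x - ((a @ x - b) / (a @ a)) * a  # project onto {x : a @ x == b}
        x = np.maximum(x, 0.0)               # project onto {x : x >= 0}
    return x

# Toy forward pass: an affine map stands in for the learned
# optimization network (hypothetical weights, not trained).
rng = np.random.default_rng(0)
W, c = rng.normal(size=(3, 4)), rng.normal(size=3)
problem_params = rng.normal(size=4)
decision = feasibility_layer(W @ problem_params + c, np.ones(3), 1.0)
```

Because the projection steps are unrolled to a fixed depth, gradients can flow from a downstream decision loss back into the network weights without differentiating through an iterative solver.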