🤖 AI Summary
To address the computational intractability and inefficiency of solving large-scale mixed-integer nonlinear programming (MINLP) problems, this paper proposes the first general-purpose learning-based optimization framework for MINLP. Methodologically, it introduces a differentiable bilevel integer correction mechanism that enables end-to-end learnable optimization within the continuous relaxation space, combined with projection-based integer constraint enforcement and a heuristic feasibility-enhancing post-processing step. The key contribution is a fully end-to-end learnable solver for MINLP that overcomes the dual bottlenecks of scalability and speed limiting conventional solvers. Experiments show that the framework achieves millisecond-level solution times on MINLP instances with tens of thousands of variables, consistently delivering higher-quality solutions than state-of-the-art commercial solvers (e.g., Gurobi, BARON) and classical heuristics, and successfully solving some of the largest publicly available MINLP instances reported to date.
📝 Abstract
Mixed-integer nonlinear programs (MINLPs) arise in diverse domains such as energy systems and transportation but are notoriously difficult to solve, particularly at large scale. While learning-to-optimize methods have been successful for continuous optimization, extending them to MINLPs remains challenging due to the integer constraints. To overcome this, we propose a novel deep-learning approach with two learnable correction layers to ensure solution integrality and a post-processing step to improve solution feasibility. To our knowledge, this is the first general learning-based method for MINLP: our experiments show that it efficiently solves large-scale instances with up to tens of thousands of variables in milliseconds, delivering high-quality solutions even when traditional solvers and heuristics fail, and that it handles some of the largest instances reported to date. Our code is available at https://github.com/pnnl/L2O-pMINLP.
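The two-stage idea described above can be caricatured in a few lines: first project a continuous relaxed solution onto the integers, then heuristically repair any constraint violations. This is a minimal sketch under assumed names and a toy budget constraint, not the paper's implementation (the actual correction layers are differentiable and learned end to end):

```python
def integer_correction(x_relaxed):
    """Integrality step: project a continuous relaxation onto the integers
    by rounding (the paper's learnable layers refine this differentiably)."""
    return [round(v) for v in x_relaxed]


def feasibility_repair(x_int, budget):
    """Post-processing step: greedily decrement the largest entry until the
    (illustrative) budget constraint sum(x) <= budget is satisfied."""
    x = list(x_int)
    while sum(x) > budget:
        i = max(range(len(x)), key=lambda j: x[j])
        x[i] -= 1
    return x


# Hypothetical relaxed solution, e.g. produced by a neural network.
x_relaxed = [1.7, 0.2, 2.9]
x_int = integer_correction(x_relaxed)       # integral, but may be infeasible
x_feasible = feasibility_repair(x_int, 4)   # integral and within budget
```

In the actual framework, the rounding and repair logic would be replaced by learnable correction layers trained within the continuous relaxation space; the greedy repair here only illustrates the role of the feasibility post-processing step.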