Infeasibility Aware Large Language Models for Combinatorial Optimization

📅 2026-04-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge that current large language models (LLMs) struggle to explicitly identify infeasible instances in combinatorial optimization. The authors propose a unified framework integrating feasible-solution generation with infeasibility detection: first, provably correct infeasibility labels are derived from exact mathematical programming to construct high-quality supervised data; then, an 8B-parameter LLM is fine-tuned on this data, and its outputs serve as warm starts for local search. Experiments show that the proposed approach achieves up to 30% higher accuracy than GPT-5.2 and accelerates downstream search by a factor of two through LLM-guided warm starts, substantially improving both solution efficiency and reliability.
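The labeling step described above can be illustrated with a toy sketch. The paper uses an exact mathematical-programming formulation for the minor-embedding problem; here we substitute a much simpler combinatorial problem (subset sum) and brute-force enumeration as the "exact solver", so every name and the problem choice below are illustrative assumptions, not the authors' pipeline. The point is the output shape: each instance is labeled either feasible with a structured certificate or certifiably infeasible.

```python
from itertools import combinations

def label_instance(items, target):
    """Exact feasibility check for a toy subset-sum instance.

    Exhaustive search stands in for the paper's exact MP solver:
    returns ("feasible", certificate) where the certificate is a
    witnessing subset, or ("infeasible", None) when no subset of
    `items` sums to `target` (a provably correct negative label).
    """
    for r in range(len(items) + 1):
        for subset in combinations(items, r):
            if sum(subset) == target:
                return "feasible", list(subset)
    return "infeasible", None

# Build a tiny supervised dataset of (instance, label, certificate) rows,
# the kind of data the LLM would be fine-tuned on.
instances = [([3, 5, 7], 12), ([2, 4, 6], 5)]
dataset = [(items, t, *label_instance(items, t)) for items, t in instances]
# ([2, 4, 6], 5) is certifiably infeasible: every subset sum is even.
```

Brute force only works at toy scale; the exact-optimization pipeline in the paper is what makes such certified labels scalable.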
📝 Abstract
Large language models (LLMs) are increasingly explored for NP-hard combinatorial optimization problems, but most existing methods emphasize feasible-instance solution generation and do not explicitly address infeasibility detection. We propose an infeasibility-aware framework that combines certifiable dataset construction, supervised fine-tuning, and LLM-assisted downstream search. For the minor-embedding problem, we introduce a new mathematical programming formulation together with provable zero-phase infeasibility screening, which enables scalable construction of training instances labeled either as feasible with structured certificates or as certifiably infeasible. Using training data generated through this exact optimization pipeline, we show that an 8B-parameter LLM can be fine-tuned to jointly perform solution generation and infeasibility detection. We further use LLM outputs as warm starts for downstream local search, providing a practical way to accelerate optimization even when the LLM outputs are imperfect. Experiments show that our fine-tuned model improves overall accuracy by up to 30% over GPT-5.2, while LLM-guided warm starts provide up to 2× speedup over starting from scratch in downstream local search.
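The warm-start claim in the abstract can be made concrete with a minimal hill-climbing sketch. This is not the paper's local-search procedure; it is a generic greedy search on a toy "maximize the number of ones" objective, and the function names and the idea of measuring speedup as step count are assumptions for illustration. The point is that a near-feasible LLM guess, even an imperfect one, leaves fewer improving moves to make than a cold start.

```python
def local_search(start, neighbors, score, max_steps=1000):
    """Greedy hill climbing: repeatedly move to the best-scoring
    neighbor until no neighbor improves the current solution.
    Returns the local optimum and the number of improving steps taken."""
    x, steps = start, 0
    while steps < max_steps:
        best = max(neighbors(x), key=score, default=x)
        if score(best) <= score(x):
            break
        x, steps = best, steps + 1
    return x, steps

# Toy objective: maximize the number of 1-bits in a length-8 string.
score = sum
def neighbors(x):
    # All single-bit flips of x.
    return [x[:i] + (1 - x[i],) + x[i + 1:] for i in range(len(x))]

cold_start = (0,) * 8                        # search from scratch
warm_start = (1, 1, 1, 1, 1, 1, 0, 0)        # imperfect "LLM guess"
_, cold_steps = local_search(cold_start, neighbors, score)
_, warm_steps = local_search(warm_start, neighbors, score)
# warm_steps < cold_steps: the warm start needs fewer improving moves.
```

Both runs reach the same optimum here; the warm start only shortens the path to it, which mirrors the abstract's point that the speedup holds even when the LLM output is imperfect.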
Problem

Research questions and friction points this paper is trying to address.

infeasibility detection
combinatorial optimization
large language models
NP-hard problems
Innovation

Methods, ideas, or system contributions that make the work stand out.

infeasibility-aware
combinatorial optimization
supervised fine-tuning
certifiable dataset
warm start