🤖 AI Summary
Large language models (LLMs) struggle to simultaneously capture fine-grained semantic constraints and ensure global combinatorial feasibility in optimization tasks.
Method: We propose an algorithm-feedback-driven iterative fine-tuning framework that reformulates LLM output distribution optimization as a simulated annealing process, grounded in a novel "coarse-grained learnability" assumption. The method couples the LLM's ability to parse natural language constraints with the global rigor of classical combinatorial algorithms on tasks such as scheduling, graph connectivity, and clustering, jointly modeling semantic constraints and combinatorial feasibility.
Contribution/Results: We establish the first sample complexity bound guaranteeing convergence under this framework. Experiments across diverse combinatorial optimization tasks demonstrate substantial improvements over baseline sampling strategies: our approach enforces global feasibility and improves solution quality while preserving the expressiveness and flexibility of natural language input.
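The refinement loop described above can be sketched very loosely as simulated annealing over a proposal distribution. Everything below is a toy illustration, not the paper's method: `algorithmic_feedback` is a hypothetical stand-in for a classical solver's feasibility check, `sample_from_model` stands in for sampling from the LLM, and the weight update is a crude proxy for fine-tuning.

```python
import math
import random

def algorithmic_feedback(solution):
    """Toy stand-in for a classical solver's feedback: scores how far
    a schedule-like solution is from global feasibility (here, the
    number of duplicate assignments; 0 means feasible)."""
    return len(solution) - len(set(solution))

def sample_from_model(weights, k):
    """Toy stand-in for drawing a candidate solution from the model's
    output distribution over items 0..len(weights)-1."""
    return random.choices(range(len(weights)), weights=weights, k=k)

def iterated_finetune(n_items=5, rounds=200, t0=2.0, cooling=0.99, seed=0):
    """Iterated refinement viewed as simulated annealing: candidates
    that reduce infeasibility are always accepted; worse candidates are
    accepted with probability exp(-delta / T); accepted candidates nudge
    the proposal weights (a crude proxy for a fine-tuning step)."""
    random.seed(seed)
    weights = [1.0] * n_items          # uniform initial "model"
    current = sample_from_model(weights, n_items)
    current_cost = algorithmic_feedback(current)
    t = t0
    for _ in range(rounds):
        cand = sample_from_model(weights, n_items)
        delta = algorithmic_feedback(cand) - current_cost
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current, current_cost = cand, algorithmic_feedback(cand)
            for item in set(cand):     # reinforce accepted structure
                weights[item] += 0.1
        t *= cooling                   # anneal the temperature
    return current, current_cost

solution, cost = iterated_finetune()
print(solution, cost)
```

As the temperature decays, infeasible candidates are accepted less often, so the loop concentrates probability mass on globally feasible outputs, mirroring the annealing interpretation of the framework at a cartoon level.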
📝 Abstract
We present a novel way to integrate flexible, context-dependent constraints into combinatorial optimization by leveraging Large Language Models (LLMs) alongside traditional algorithms. Although LLMs excel at interpreting nuanced, locally specified requirements, they struggle with enforcing global combinatorial feasibility. To bridge this gap, we propose an iterated fine-tuning framework where algorithmic feedback progressively refines the LLM's output distribution. Interpreting this as simulated annealing, we introduce a formal model based on a "coarse learnability" assumption, providing sample complexity bounds for convergence. Empirical evaluations on scheduling, graph connectivity, and clustering tasks demonstrate that our framework balances the flexibility of locally expressed constraints with rigorous global optimization more effectively than baseline sampling methods. Our results highlight a promising direction for hybrid AI-driven combinatorial reasoning.