🤖 AI Summary
Despite growing interest in applying large language models (LLMs) to combinatorial optimization (CO), the field lacks a systematic synthesis of applications, challenges, and methodological gaps. Method: Following the PRISMA guidelines, we systematically identified and analyzed 103 studies from Scopus and Google Scholar, establishing the first four-dimensional taxonomy (task types, model architectures, domain-specific evaluation benchmarks, and application scenarios) and employing semantic analysis and topic modeling to characterize LLM usage in solution generation, representation learning, and constraint modeling. Contribution/Results: We identify critical limitations, including data scarcity, poor generalization across problem classes, and insufficient interpretability. This work delivers the first structured knowledge graph and methodological framework for the LLM–CO intersection and proposes concrete future directions: scalable prompt engineering, domain-adaptive pretraining, and neuro-symbolic hybrid modeling.
📝 Abstract
This systematic review explores the application of Large Language Models (LLMs) in Combinatorial Optimization (CO). We report our findings following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. We conduct a literature search via Scopus and Google Scholar, examining over 2,000 publications. We assess publications against four inclusion and four exclusion criteria related to their language, research focus, publication year, and type. Ultimately, we select 103 studies. We classify these studies into semantic categories and topics to provide a comprehensive overview of the field, covering the tasks performed by LLMs, the architectures of LLMs, the existing datasets specifically designed for evaluating LLMs in CO, and the fields of application. Finally, we identify future directions for leveraging LLMs in this area.