🤖 AI Summary
This work investigates the reasoning reliability of large language models (LLMs) on the Resource-Constrained Project Scheduling Problem (RCPSP), a highly constrained NP-complete problem. We propose R-ConstraintBench, an evaluation framework that generates synthetic RCPSP instances of controllable difficulty using directed acyclic graphs, systematically incorporating non-redundant precedence, downtime, time-window, and mutual-exclusion constraints. Our feasibility analysis and error-mode diagnosis reveal that constraint interaction, rather than individual constraint complexity, is the primary bottleneck behind LLM failures. Experiments across diverse LLMs show near-optimal performance under precedence constraints alone but sharp feasibility degradation under composite constraints; moreover, strong performance on synthetic data fails to generalize to real-world domain instances. This is the first study to quantitatively characterize the detrimental impact of constraint interaction on LLM reasoning for scheduling. The work establishes a scalable, diagnostic benchmark and methodology for developing trustworthy AI-driven scheduling systems.
📝 Abstract
Effective scheduling under tight resource, timing, and operational constraints underpins large-scale planning across sectors such as capital projects, manufacturing, logistics, and IT fleet transitions. However, the reliability of large language models (LLMs) when reasoning under high-constraint regimes is insufficiently characterized. To address this gap, we present R-ConstraintBench, a scalable framework that evaluates models on Resource-Constrained Project Scheduling Problems (RCPSP), an NP-complete feasibility class, with difficulty scaled via linear growth in the number of constraints. R-ConstraintBench incrementally adds non-redundant precedence constraints over Directed Acyclic Graphs (DAGs) and then introduces downtime, temporal-window, and disjunctive constraints. As an illustrative example, we instantiate the benchmark in a data center migration setting and evaluate multiple LLMs using feasibility and error analysis, identifying degradation thresholds and the constraint types most associated with failure. Empirically, strong models are near-ceiling on precedence-only DAGs, but feasibility performance collapses when downtime, temporal-window, and disjunctive constraints interact, implicating constraint interaction, not graph depth, as the principal bottleneck. Performance on clean synthetic ramps also does not guarantee transfer to domain-grounded scenarios, underscoring limited generalization.
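To make the setup concrete, the two core ingredients the abstract describes can be sketched in a few lines: sampling non-redundant precedence edges over a DAG, and checking a candidate schedule for feasibility against precedence plus a single renewable resource. This is a minimal illustrative sketch, not the paper's actual generator; all function names and the simplified redundancy handling are assumptions.

```python
import random

def random_dag_precedence(n_tasks, n_edges, seed=0):
    """Sample precedence edges that are non-redundant at insertion time.

    Hypothetical sketch: edges always point from a lower to a higher task
    index, so the graph is acyclic by construction; an edge already implied
    by transitivity is skipped. (A later edge can still make an earlier one
    redundant; a full generator would re-prune the transitive reduction.)
    """
    rng = random.Random(seed)
    edges = set()
    reach = {i: set() for i in range(n_tasks)}  # transitive closure
    while len(edges) < n_edges:
        a, b = sorted(rng.sample(range(n_tasks), 2))
        if b in reach[a]:          # edge already implied -> redundant
            continue
        edges.add((a, b))
        new = {b} | reach[b]       # everything now reachable through b
        for i in reach:
            if i == a or a in reach[i]:
                reach[i] |= new
    return sorted(edges)

def feasible(start, dur, edges, capacity, demand, horizon):
    """Feasibility core of RCPSP: every precedence edge (a, b) must have
    task a finish no later than task b starts, and at every time step the
    total demand of running tasks must not exceed the resource capacity."""
    for a, b in edges:
        if start[a] + dur[a] > start[b]:
            return False
    for t in range(horizon):
        load = sum(demand[i] for i in range(len(dur))
                   if start[i] <= t < start[i] + dur[i])
        if load > capacity:
            return False
    return True
```

The downtime, temporal-window, and disjunctive constraints studied in the paper would each add a further check of the same shape, which is what makes the ramp of difficulty easy to control: each layer is simple in isolation, and failures arise from their interaction.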