🤖 AI Summary
Problem: Designing streamlining constraints for Constraint Optimization Problems (COPs) relies heavily on manual, problem-specific effort, which limits generalizability and scalability.
Method: We propose the first LLM-based approach for automatically generating MiniZinc streamlining constraints. Our method combines prompt engineering, lightweight empirical feedback loops, and test-driven iterative validation to identify high-performing constraint combinations, while mitigating memorization bias, avoiding dependence on long offline tuning runs, and supporting cross-problem generalization.
Contribution/Results: Evaluated on seven representative COP classes, the generated constraints substantially reduce search space size and outperform both human-crafted and systematically constructed baselines in solving speed. Ablation studies, including on adversarially obfuscated and disguised benchmark variants, demonstrate robustness and transferability. This work pioneers the use of LLMs for creative, data-efficient streamliner synthesis and establishes a verifiable, reproducible empirical optimization paradigm grounded in rigorous testing and feedback.
📝 Abstract
Streamlining constraints (or streamliners, for short) narrow the search space, enhancing the speed and feasibility of solving complex constraint satisfaction problems. Traditionally, streamliners were crafted manually or generated by systematically combining atomic constraints with high-effort offline testing. Our approach exploits the creativity of Large Language Models (LLMs) to propose effective streamliners for problems specified in the MiniZinc constraint programming language, feeding the results of quick empirical validation tests back to the LLM. Evaluated across seven diverse constraint satisfaction problems, our method achieves substantial runtime reductions. We compare the results on obfuscated and disguised variants of the problems to check whether performance depends on LLM memorization. We also analyze whether longer offline runs improve the quality of streamliners and whether the LLM can propose good combinations of streamliners.
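The propose-test-feedback loop described in the abstract can be sketched in a few lines. The sketch below is illustrative only and uses stub functions: `propose_streamliner` stands in for an LLM prompt that returns a candidate MiniZinc constraint, and `solve_time` stands in for a short, timeout-bounded MiniZinc solver run. Both names are hypothetical, not the paper's actual interface.

```python
# Illustrative sketch of a test-driven streamliner search loop.
# propose_streamliner() and solve_time() are hypothetical stubs standing in
# for an LLM call and a timeout-bounded MiniZinc solver run, respectively.
import random

def propose_streamliner(model: str, feedback: list[str]) -> str:
    # Placeholder for an LLM prompt (would also include rejected candidates
    # from `feedback`); returns a candidate MiniZinc constraint as a string.
    candidates = [
        "constraint x[1] <= x[n];",                        # symmetry-breaking guess
        "constraint forall(i in 1..n-1)(x[i] != x[i+1]);", # local-structure guess
        "constraint x[1] = min(x);",
    ]
    return random.choice(candidates)

def solve_time(model: str, timeout_s: float = 5.0) -> float:
    # Placeholder: would invoke a MiniZinc solver on `model` and return
    # wall-clock time, or float('inf') if the streamlined model is UNSAT.
    return random.uniform(0.1, timeout_s)

def search_streamliners(model: str, rounds: int = 10) -> tuple[str, float]:
    """Greedily accumulate streamliners that empirically speed up solving."""
    best_model, best_time = model, solve_time(model)
    feedback: list[str] = []
    for _ in range(rounds):
        cand = propose_streamliner(best_model, feedback)
        t = solve_time(best_model + "\n" + cand)
        if t < best_time:  # keep only constraints that help empirically
            best_model, best_time = best_model + "\n" + cand, t
        else:              # otherwise report the failure back to the LLM
            feedback.append(f"rejected: {cand} (time {t:.2f}s)")
    return best_model, best_time
```

In this shape, a candidate streamliner is accepted only if a quick empirical test shows a speedup, and rejected candidates become feedback for the next LLM prompt; a real implementation would also re-check that accepted streamliners preserve satisfiability.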