🤖 AI Summary
This work addresses the challenge of dynamically coordinating intuitive (System-1) and search-based (System-2) reasoning, under explicit user control, in long-horizon planning with large language models (LLMs). The authors propose System-1.x, a tunable hybrid planning framework in which a learned controller routes each sub-goal either to a fast, heuristic System-1 planner or to a slower, search-intensive System-2 planner based on estimated sub-goal difficulty. Crucially, users can continuously adjust the System-1/System-2 balance via a scalar hybridization factor. All components are fine-tuned jointly on a single base LLM using only search traces as supervision, and the framework natively supports neuro-symbolic variants. Evaluated on Maze Navigation and Blocksworld, System-1.x outperforms a pure System-1 planner, a System-2 planner trained to approximate A* search, and a symbolic A* planner. The results demonstrate three core properties: explicit user controllability, neuro-symbolic flexibility, and generalizability across different search algorithms.
📝 Abstract
Language models can be used to solve long-horizon planning problems in two distinct modes: a fast "System-1" mode, directly generating plans without any explicit search or backtracking, and a slow "System-2" mode, planning step-by-step by explicitly searching over possible actions. While System-2 is typically more effective, it is also more computationally expensive, making it infeasible for long plans or large action spaces. Moreover, using either System-1 or System-2 in isolation ignores the user's end goals, providing no way to control the model's behavior. To this end, we propose the System-1.x Planner, a controllable planning framework with LLMs that generates hybrid plans and balances the two planning modes based on the difficulty of the problem at hand. System-1.x consists of (i) a controller, (ii) a System-1 Planner, and (iii) a System-2 Planner. Based on a user-specified hybridization factor (x) governing the mixture between System-1 and System-2, the controller decomposes a problem into sub-goals and classifies them as easy or hard, to be solved by System-1 or System-2, respectively. We fine-tune all three components on top of a single base LLM, requiring only search traces as supervision. Experiments with two diverse planning tasks -- Maze Navigation and Blocksworld -- show that our System-1.x Planner outperforms a System-1 Planner, a System-2 Planner trained to approximate A* search, and also a symbolic planner (A*). We demonstrate the following key properties of our planner: (1) controllability: increasing the hybridization factor (e.g., System-1.75 vs System-1.5) performs more search, improving performance; (2) flexibility: by building a neuro-symbolic variant with a neural System-1 and a symbolic System-2, we can leverage existing symbolic methods; and (3) generalizability: by learning from different search algorithms, our method is robust to the choice of search algorithm.
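To make the role of the hybridization factor concrete, here is a minimal sketch of the routing idea described above: the hardest fraction of sub-goals (as set by x) go to the slow System-2 planner, and the rest go to System-1. All names and the exact routing rule are illustrative assumptions, not the paper's actual implementation; in System-1.x the controller itself is a fine-tuned LLM, and the difficulty scores here stand in for its learned classification.

```python
def route_subgoals(difficulties, x):
    """Assign each sub-goal to 'system1' (fast) or 'system2' (search).

    difficulties: hypothetical per-sub-goal difficulty estimates (higher = harder),
                  standing in for the learned controller's judgment.
    x: user-specified hybridization factor in [1, 2];
       x=1 -> pure System-1, x=2 -> pure System-2, x=1.5 -> a 50/50 mix.
    """
    assert 1.0 <= x <= 2.0
    n = len(difficulties)
    n_hard = round((x - 1.0) * n)  # fraction of sub-goals sent to System-2
    # Indices of the n_hard hardest sub-goals, hardest first.
    hardest = sorted(range(n), key=lambda i: -difficulties[i])[:n_hard]
    hard_set = set(hardest)
    return ["system2" if i in hard_set else "system1" for i in range(n)]

# With x=1.5, half of the four sub-goals (the two hardest) get System-2.
modes = route_subgoals([0.9, 0.2, 0.7, 0.1], x=1.5)
# -> ['system2', 'system1', 'system2', 'system1']
```

Increasing x (e.g., System-1.75 vs System-1.5) routes more sub-goals through search, which is the controllability property the abstract describes; swapping the System-2 callee for a symbolic A* solver yields the neuro-symbolic variant.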