🤖 AI Summary
Large language models (LLMs) often deviate from task objectives and generate redundant or erroneous outputs on tasks with high logical complexity and strong constraints. To address this, we propose Fast-Slow-Thinking (FST), a cognitively inspired two-stage task decomposition framework. In the "Fast Thinking" stage, constraints are relaxed to enable coarse-grained task abstraction; in the "Slow Thinking" stage, explicit constraint modeling and iterative refinement provide fine-grained output correction. FST combines a prompt-engineering-driven dual-phase reasoning architecture with techniques for task abstraction and structured constraint representation. Experiments across three complex task categories show that FST reduces average error rates by 37% and redundant content by 52% relative to baseline methods, significantly improving solution accuracy and logical consistency. By enabling interpretable, traceable reasoning paths, FST establishes a new paradigm for LLM-based task decomposition.
📝 Abstract
Large Language Models (LLMs) are increasingly employed to solve complex tasks. To meet this challenge, task decomposition has become an effective approach: a complex task is divided into multiple simpler subtasks that are solved separately, reducing the difficulty of the original task. However, existing task decomposition methods can perform suboptimally when the task involves overly complex logic and constraints. In this situation, the solution generated by LLMs may deviate from the original purpose of the task, or contain redundant or even erroneous content. Inspired by the fact that humans possess two thinking systems, fast thinking and slow thinking, this paper introduces a new task decomposition method termed "Fast-Slow-Thinking" (FST), which guides LLMs to solve tasks through the cooperation of Fast Thinking (FT) and Slow Thinking (ST) steps. FT focuses on the general, concise aspect of the task, while ST focuses on its details. In FT, LLMs are prompted to remove the constraints of the original task, thereby simplifying it into a general, concise one. In ST, the constraints removed in FT are reintroduced, so that LLMs can improve the answer generated in FT to meet the requirements of the original task. Our FST method thus enables LLMs to approach a complex problem through a human-like, coarse-to-fine cognitive process, whose effectiveness is demonstrated by experiments on three types of tasks.
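To make the two stages concrete, here is a minimal Python sketch of the FT/ST pipeline described above. The prompt templates and the `call_llm` interface are illustrative assumptions rather than the paper's actual prompts, and the ST step may in practice iterate the refinement more than once.

```python
from typing import Callable

# Hypothetical prompt templates paraphrasing the two FST stages;
# these are NOT the paper's actual prompt wording.
FT_PROMPT = (
    "Rewrite the following task by dropping its detailed constraints so that "
    "only the general goal remains, then solve that simplified task.\n\n"
    "Task: {task}"
)

ST_PROMPT = (
    "Original task with all of its constraints:\n{task}\n\n"
    "Draft answer produced for a simplified version of the task:\n{draft}\n\n"
    "Revise the draft so it satisfies every constraint of the original task."
)

def fast_slow_thinking(task: str, call_llm: Callable[[str], str]) -> str:
    """Solve a constrained task coarse-to-fine via Fast then Slow Thinking."""
    # Fast Thinking: relax constraints and produce a coarse, general draft.
    draft = call_llm(FT_PROMPT.format(task=task))
    # Slow Thinking: reintroduce the removed constraints and refine the draft.
    return call_llm(ST_PROMPT.format(task=task, draft=draft))
```

Any text-in, text-out completion function (e.g., a wrapper around a chat API) can be passed as `call_llm`, since the sketch only depends on a `str -> str` interface.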