Fast-Slow-Thinking: Complex Task Solving with Large Language Models

📅 2025-04-11
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large language models (LLMs) often deviate from task objectives and generate redundant or erroneous outputs in high-logical-complexity, strongly constrained tasks. To address this, we propose the cognitive-inspired Fast-Slow-Thinking (FST) two-stage task decomposition framework. In the “Fast Thinking” stage, constraints are relaxed to enable coarse-grained task abstraction; in the “Slow Thinking” stage, explicit constraint modeling and iterative refinement ensure fine-grained output correction. FST integrates a prompt-engineering-driven dual-phase reasoning architecture with techniques for task abstraction and structured constraint representation. Experiments across three complex task categories demonstrate that FST reduces average error rates by 37% and redundant content by 52% compared to baseline methods, significantly improving solution accuracy and logical consistency. By enabling interpretable, traceable reasoning paths, FST establishes a novel paradigm for LLM-based task decomposition.

Technology Category

Application Category

📝 Abstract
Nowadays, Large Language Models (LLMs) have been gradually employed to solve complex tasks. To face the challenge, task decomposition has become an effective way, which proposes to divide a complex task into multiple simpler subtasks and then solve them separately so that the difficulty of the original task can be reduced. However, the performance of existing task decomposition methods can be suboptimal when the task contains overly complex logic and constraints. In this situation, the solution generated by LLMs may deviate from the original purpose of the task, or contain redundant or even erroneous content. Therefore, inspired by the fact that humans possess two thinking systems including fast thinking and slow thinking, this paper introduces a new task decomposition method termed ``Fast-Slow-Thinking'' (FST), which stimulates LLMs to solve tasks through the cooperation of Fast Thinking (FT) and Slow Thinking (ST) steps. Here FT focuses more on the general and concise aspect of the task, and ST focuses more on the details of the task. In FT, LLMs are prompted to remove the constraints of the original task, therefore simplifying it to a general and concise one. In ST, we recall the constraints removed in FT, so that LLMs can improve the answer generated in FT to meet the requirements of the original task. Therefore, our FST method enables LLMs to consider a complex problem via a human-like cognition process from coarse to fine, the effectiveness of which has been well demonstrated by the experiments on three types of tasks.
Problem

Research questions and friction points this paper is trying to address.

Improves complex task decomposition for Large Language Models
Addresses suboptimal performance in logic-heavy constrained tasks
Proposes Fast-Slow-Thinking to mimic human cognition in LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fast-Slow-Thinking for task decomposition
FT simplifies tasks by removing constraints
ST refines answers by recalling constraints
🔎 Similar Papers
No similar papers found.
Y
Yiliu Sun
School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, 210094, P.R. China; Key Laboratory of Intelligent Perception and Systems for High-Dimensional Information of Ministry of Education; Jiangsu Key Laboratory of Image and Video Understanding for Social Security
Y
Yanfang Zhang
School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, 210094, P.R. China; Key Laboratory of Intelligent Perception and Systems for High-Dimensional Information of Ministry of Education; Jiangsu Key Laboratory of Image and Video Understanding for Social Security
Zicheng Zhao
Zicheng Zhao
Nanjing University of Science and Technology
Knowledge GraphLarge Language ModelFew-shot LearningSemi-Supervised Learning
Sheng Wan
Sheng Wan
Nanjing University of Science and Technology
machine learninghyperspectral image classification
Dacheng Tao
Dacheng Tao
Nanyang Technological University
artificial intelligencemachine learningcomputer visionimage processingdata mining
C
Chen Gong
School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, 210094, P.R. China; Key Laboratory of Intelligent Perception and Systems for High-Dimensional Information of Ministry of Education; Jiangsu Key Laboratory of Image and Video Understanding for Social Security