Thinker: Training LLMs in Hierarchical Thinking for Deep Search via Multi-Turn Interaction

📅 2025-11-11
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing end-to-end reinforcement learning methods that train LLMs to invoke external retrievers for complex problems lack explicit supervision over the reasoning process, so logical coherence and rigor are hard to guarantee. This paper proposes a supervisable and verifiable hierarchical reasoning framework: it decomposes problems into dependency-ordered subproblems via multi-turn interaction, represents each subproblem dually in natural language and as a logical function, and introduces a knowledge boundary determination mechanism that suppresses redundant retrieval while explicitly modeling inter-subproblem dependencies. The framework supports hybrid retrieval from both knowledge bases and the web, substantially improving the controllability and interpretability of reasoning. With only a few hundred training samples it matches established baselines; trained on the full dataset, it consistently outperforms state-of-the-art methods across multiple datasets and LLMs of varying scales.

📝 Abstract
Efficient retrieval of external knowledge bases and web pages is crucial for enhancing the reasoning abilities of LLMs. Previous works on training LLMs to leverage external retrievers for solving complex problems have predominantly employed end-to-end reinforcement learning. However, these approaches neglect supervision over the reasoning process, making it difficult to guarantee logical coherence and rigor. To address these limitations, we propose Thinker, a hierarchical thinking model for deep search through multi-turn interaction, making the reasoning process supervisable and verifiable. It decomposes complex problems into independently solvable sub-problems, each dually represented in both natural language and an equivalent logical function to support knowledge base and web searches. Concurrently, dependencies between sub-problems are passed as parameters via these logical functions, enhancing the logical coherence of the problem-solving process. To avoid unnecessary external searches, we perform knowledge boundary determination to check if a sub-problem is within the LLM's intrinsic knowledge, allowing it to answer directly. Experimental results indicate that with as few as several hundred training samples, the performance of Thinker is competitive with established baselines. Furthermore, when scaled to the full training set, Thinker significantly outperforms these methods across various datasets and model sizes. The source code is available at https://github.com/OpenSPG/KAG-Thinker.
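The control flow the abstract describes can be sketched in a few lines. This is an illustrative reconstruction only: the names (`SubProblem`, `is_within_knowledge_boundary`, the `Capital(...)`/`Population(...)` logical-function syntax) and the stubbed LLM/retriever calls are assumptions, not the actual KAG-Thinker API.

```python
from dataclasses import dataclass, field

@dataclass
class SubProblem:
    """One step of the decomposition, dually represented."""
    sid: str                  # identifier other sub-problems can depend on
    nl_question: str          # natural-language form
    logical_fn: str           # equivalent logical-function form
    depends_on: list = field(default_factory=list)  # prerequisite sub-problem ids

def is_within_knowledge_boundary(sub: SubProblem) -> bool:
    # Hypothetical stand-in for the paper's knowledge boundary determination:
    # the real model asks the LLM whether it can answer from intrinsic
    # knowledge. Here, a toy heuristic for demonstration.
    return "capital" in sub.nl_question.lower()

def llm_answer(q: str) -> str:
    return f"LLM({q})"        # stub: answer directly from the LLM

def retrieve(q: str) -> str:
    return f"SEARCH({q})"     # stub: knowledge-base / web search

def solve(sub_problems):
    """Solve sub-problems in dependency order, passing answers as parameters."""
    answers = {}
    for sub in sub_problems:  # assumed already dependency-ordered
        # Dependencies between sub-problems are passed as parameters
        # of the logical function.
        fn = sub.logical_fn.format(**{d: answers[d] for d in sub.depends_on})
        if is_within_knowledge_boundary(sub):
            answers[sub.sid] = llm_answer(fn)   # skip the external search
        else:
            answers[sub.sid] = retrieve(fn)     # fall back to retrieval
    return answers

subs = [
    SubProblem("s1", "What is the capital of France?", "Capital(France)"),
    SubProblem("s2", "What is the population of that city?",
               "Population({s1})", depends_on=["s1"]),
]
print(solve(subs))
```

The second sub-problem never mentions the first in text; its logical function `Population({s1})` makes the dependency explicit, which is what lets the reasoning chain be supervised and verified step by step.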
Problem

Research questions and friction points this paper is trying to address.

Enhancing LLM reasoning through hierarchical decomposition of complex problems
Ensuring logical coherence via dual representation of sub-problems
Optimizing external searches by determining LLM knowledge boundaries
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hierarchical thinking model for deep search
Dual representation of sub-problems in language and logic
Knowledge boundary determination to avoid unnecessary searches
👥 Authors
Jun Xu (Ant Group, Hangzhou, China)
Xinkai Du (Ant Group, Hangzhou, China)
Yu Ao (Ant Group, Hangzhou, China)
Peilong Zhao (Ant Group, Hangzhou, China)
Yang Li (Ant Group, Hangzhou, China)
Ling Zhong (Ant Group, Hangzhou, China)
Lin Yuan (Ant Group, Hangzhou, China)
Zhongpu Bo (Ant Group, Hangzhou, China)
Xiaorui Wang (The Ohio State University)
Mengshu Sun (Beijing University of Technology)
Zhengke Gui (Ant Group, Hangzhou, China)
Dalong Zhang (Ant Group, Hangzhou, China)
Zhaoyang Wang (University of North Carolina at Chapel Hill)
Qiwei Wang (ShanghaiTech University)
Yangyang Hou (Ant Group, Hangzhou, China)
Zhiying Yin (Ant Group, Hangzhou, China)
Haofen Wang (Tongji University)
Huajun Chen (Zhejiang University, Hangzhou, China)
Lei Liang (Ant Group)
Jun Zhou (Ant Group, Hangzhou, China)