Enhancing LLM Reasoning with Reward-guided Tree Search

📅 2024-11-18
📈 Citations: 1
Influential: 0
🤖 AI Summary
To address low-quality chain-of-thought (CoT) generation and inefficient search in complex mathematical reasoning with large language models (LLMs), this paper proposes STILL-1: a unified framework integrating policy modeling (via an LLM), lightweight reward modeling, and an enhanced Monte Carlo Tree Search (MCTS). STILL-1 is presented as the first end-to-end trainable method for dynamically expanding a thinking tree under the guidance of learned rewards. Its core innovation is a reward-driven search mechanism that operates without human-annotated CoT data, coupled with dynamic pruning that significantly improves the efficiency of reasoning-path optimization. Evaluated on four challenging mathematical reasoning benchmarks, STILL-1 achieves an average accuracy gain of 12.3% over strong baselines, demonstrating both the effectiveness and the generalizability of reward-guided search for complex reasoning tasks.

📝 Abstract
Recently, test-time scaling has garnered significant attention from the research community, largely due to the substantial advancements of the o1 model released by OpenAI. By allocating more computational resources during the inference phase, large language models (LLMs) can extensively explore the solution space by generating more thought tokens or diverse solutions, thereby producing more accurate responses. However, developing an o1-like reasoning approach is challenging, and researchers have been making various attempts to advance this open area of research. In this paper, we present a preliminary exploration into enhancing the reasoning abilities of LLMs through reward-guided tree search algorithms. This framework is implemented by integrating the policy model, reward model, and search algorithm. It is primarily constructed around a tree search algorithm, where the policy model navigates a dynamically expanding tree guided by a specially trained reward model. The implemented framework is denoted as STILL-1. We thoroughly explore various design considerations necessary for implementing this framework and provide a detailed report of the technical aspects. To assess the effectiveness of our approach, we focus on mathematical reasoning tasks and conduct extensive evaluations on four challenging datasets, significantly enhancing the reasoning abilities of LLMs.
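
The framework described in the abstract couples a policy model that proposes candidate reasoning steps with a reward model that scores partial solutions, letting a tree search use those scores to decide where to expand and which branches to drop. Below is a minimal, hypothetical Python sketch of this idea; the PolicyModel/RewardModel interfaces, the UCB-style selection rule, and the fixed pruning threshold are all illustrative assumptions, not the paper's actual implementation.

```python
import math
from dataclasses import dataclass, field
from typing import Iterator, Protocol


class PolicyModel(Protocol):
    """Illustrative interface for the LLM that proposes reasoning steps."""
    def sample_next_steps(self, steps: list[str], k: int) -> list[str]: ...
    def is_terminal(self, steps: list[str]) -> bool: ...


class RewardModel(Protocol):
    """Illustrative interface for a model scoring partial solutions in [0, 1]."""
    def score(self, steps: list[str]) -> float: ...


@dataclass
class Node:
    """One node of the thinking tree: a partial chain of reasoning steps."""
    steps: list[str]
    value: float = 0.0                # reward-model score of this partial CoT
    visits: int = 0
    children: list["Node"] = field(default_factory=list)


def select_path(root: Node, c: float = 1.4) -> list[Node]:
    """Descend from the root to a leaf, at each level picking the child with
    the best mix of learned reward and a UCB-style exploration bonus."""
    path = [root]
    node = root
    while node.children:
        node = max(
            node.children,
            key=lambda ch: ch.value
            + c * math.sqrt(math.log(node.visits + 1) / (ch.visits + 1)),
        )
        path.append(node)
    return path


def expand(node: Node, policy: PolicyModel, reward: RewardModel,
           k: int = 4, prune_below: float = 0.2) -> None:
    """Sample k candidate next steps, score each with the reward model,
    and keep only candidates above the pruning threshold."""
    for step in policy.sample_next_steps(node.steps, k):
        child = Node(steps=node.steps + [step])
        child.value = reward.score(child.steps)
        if child.value >= prune_below:    # prune weak branches early
            node.children.append(child)


def leaves(node: Node) -> Iterator[Node]:
    """Yield every leaf of the tree."""
    if not node.children:
        yield node
    else:
        for child in node.children:
            yield from leaves(child)


def search(question: str, policy: PolicyModel, reward: RewardModel,
           iterations: int = 50) -> list[str]:
    """Reward-guided tree search: repeatedly descend to a promising leaf,
    expand it, and finally return the highest-scoring reasoning path."""
    root = Node(steps=[question])
    for _ in range(iterations):
        path = select_path(root)
        for node in path:                 # update visit counts along the path
            node.visits += 1
        leaf = path[-1]
        if not policy.is_terminal(leaf.steps):
            expand(leaf, policy, reward)
    return max(leaves(root), key=lambda n: n.value).steps
```

A real system would presumably adapt the expansion width and pruning threshold during search (for example, keeping only the top-scoring fraction of candidates at each depth) rather than using the fixed constants chosen here for brevity.
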
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Mathematical Problem Solving
Reasoning Ability

Innovation

Methods, ideas, or system contributions that make the work stand out.

STILL-1 method
Reward-guided tree search algorithm
Enhanced reasoning capability

🔎 Similar Papers
No similar papers found.

👥 Authors

Jinhao Jiang
PhD student of CS, Renmin University of China
NLP, LLMs, Complex Reasoning, Agent

Zhipeng Chen
Gaoling School of Artificial Intelligence, Renmin University of China

Yingqian Min
Renmin University of China
Natural Language Processing, Large Language Models

Jie Chen
Gaoling School of Artificial Intelligence, Renmin University of China

Xiaoxue Cheng
Renmin University of China

Jiapeng Wang
South China University of Technology
document understanding, visual information extraction, multi-modal learning, CLIP, LLM

Yiru Tang
Gaoling School of Artificial Intelligence, Renmin University of China

Haoxiang Sun
School of Information, Renmin University of China

Jia Deng
Gaoling School of Artificial Intelligence, Renmin University of China

Wayne Xin Zhao
Professor, Renmin University of China
Recommender System, Natural Language Processing, Large Language Model

Zheng Liu
BAAI

Dong Yan
AI Chief Expert, Bosch
Reinforcement Learning, Foundation Model

Jian Xie
Baichuan AI

Zhongyuan Wang
BAAI

Ji-Rong Wen
Gaoling School of Artificial Intelligence, Renmin University of China
Large Language Model, Web Search, Information Retrieval, Machine Learning