LLMs are Greedy Agents: Effects of RL Fine-tuning on Decision-Making Abilities

📅 2025-04-22
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work systematically identifies three failure modes underlying large language models' (LLMs) poor performance in sequential decision-making: greediness, frequency bias, and the knowing-doing gap (knowing the correct action yet failing to act on it). To mitigate these, the authors fine-tune LLMs with reinforcement learning (using Proximal Policy Optimization, PPO) on self-generated chain-of-thought (CoT) rationales, and study both a classic exploration mechanism, ε-greedy, and LLM-specific mechanisms, self-correction and self-consistency. Experiments across multi-armed bandits, contextual bandits, and Tic-tac-toe show that RL fine-tuning increases exploration and improves optimal-action selection, narrowing the knowing-doing gap and unifying CoT reasoning with policy optimization.
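The ε-greedy mechanism mentioned above can be sketched in a few lines: with probability ε the agent picks a uniformly random arm, otherwise it follows the model's greedy proposal. The `greedy_policy` stand-in below is a hypothetical placeholder for an LLM that always proposes the same arm (illustrating the greedy bias), not the paper's actual setup.

```python
import random

def epsilon_greedy_action(propose_action, arms, epsilon, rng):
    """With probability epsilon, pick a uniformly random arm (exploration);
    otherwise act on the model's proposed arm (exploitation)."""
    if rng.random() < epsilon:
        return rng.choice(arms)
    return propose_action()

# Toy stand-in for a greedily biased LLM policy: always proposes arm 0.
greedy_policy = lambda: 0

arms = [0, 1, 2]
rng = random.Random(0)
picks = [epsilon_greedy_action(greedy_policy, arms, epsilon=0.5, rng=rng)
         for _ in range(100)]
# With epsilon=0.5, roughly half the picks deviate from the greedy arm,
# so the agent visits arms it would otherwise never try.
```

Forcing exploration this way addresses the greediness failure mode directly: the model still supplies the exploitation choice, but the wrapper guarantees coverage of under-sampled arms.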

📝 Abstract
The success of Large Language Models (LLMs) has sparked interest in various agentic applications. A key hypothesis is that LLMs, leveraging common sense and Chain-of-Thought (CoT) reasoning, can effectively explore and efficiently solve complex domains. However, LLM agents have been found to suffer from sub-optimal exploration and the knowing-doing gap, the inability to effectively act on knowledge present in the model. In this work, we systematically study why LLMs perform sub-optimally in decision-making scenarios. In particular, we closely examine three prevalent failure modes: greediness, frequency bias, and the knowing-doing gap. We propose mitigation of these shortcomings by fine-tuning via Reinforcement Learning (RL) on self-generated CoT rationales. Our experiments across multi-armed bandits, contextual bandits, and Tic-tac-toe demonstrate that RL fine-tuning enhances the decision-making abilities of LLMs by increasing exploration and narrowing the knowing-doing gap. Finally, we study both classic exploration mechanisms, such as ε-greedy, and LLM-specific approaches, such as self-correction and self-consistency, to enable more effective fine-tuning of LLMs for decision-making.
Problem

Research questions and friction points this paper is trying to address.

LLMs exhibit sub-optimal exploration in decision-making
LLMs suffer from greediness and frequency bias
LLMs struggle with the knowing-doing gap
Innovation

Methods, ideas, or system contributions that make the work stand out.

RL fine-tuning on self-generated CoT rationales
Enhancing exploration and narrowing the knowing-doing gap
Combining classic and LLM-specific exploration mechanisms
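Of the LLM-specific mechanisms listed above, self-consistency is easy to sketch: sample several independent CoT rollouts and act on the majority vote, damping the variance of any single rationale. The `noisy_policy` below is a hypothetical stand-in for sampled LLM rollouts, not the paper's model.

```python
import random
from collections import Counter

def self_consistent_action(sample_action, n_samples):
    """Sample n_samples independent rollouts of a stochastic policy and
    return the most frequently proposed action (majority vote)."""
    votes = [sample_action() for _ in range(n_samples)]
    return Counter(votes).most_common(1)[0][0]

# Toy stochastic policy standing in for sampled CoT rollouts:
# proposes arm 1 most of the time, arm 2 occasionally.
rng = random.Random(1)
noisy_policy = lambda: 1 if rng.random() < 0.7 else 2

action = self_consistent_action(noisy_policy, n_samples=11)
```

Majority voting trades extra inference calls for a more reliable action choice, which is why it pairs naturally with RL fine-tuning: the vote stabilizes the policy whose rollouts generate the training rationales.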