Large Language Models Think Too Fast To Explore Effectively

📅 2025-01-29
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large language models (LLMs) exhibit suboptimal performance in open-ended exploration tasks—e.g., Little Alchemy 2—due to premature decision-making and inefficient exploration, rooted in an intrinsic misalignment between uncertainty and empowerment signals in their internal representations. Method: We introduce an exploration evaluation framework built on Little Alchemy 2, integrating sparse autoencoder (SAE)-based representational analysis with inter-layer activation tracing in Transformers to characterize exploration dynamics. Contribution/Results: We identify “empowerment-perception latency” as a novel theoretical bottleneck: uncertainty signals emerge earlier than empowerment-relevant representations, causing LLMs to act before sufficient exploratory value is computed. Empirical analysis shows most LLMs—except o1—explore significantly less efficiently than humans and rely solely on uncertainty-driven strategies. In contrast, humans jointly leverage uncertainty and empowerment signals for effective exploration. Our work provides both an interpretable mechanistic account of LLM exploration limitations and a principled evaluation paradigm for autonomous discovery.

📝 Abstract
Large Language Models have demonstrated many intellectual capacities. While numerous benchmarks assess their intelligence, limited attention has been given to their ability to explore, an essential capacity for discovering new information and adapting to novel environments in both natural and artificial systems. The extent to which LLMs can explore effectively, particularly in open-ended tasks, remains unclear. This study investigates whether LLMs can surpass humans in exploration during an open-ended task, using Little Alchemy 2 as a paradigm, in which agents combine elements to discover new ones. Results show that most LLMs underperform humans, with the exception of the o1 model; these traditional LLMs rely primarily on uncertainty-driven strategies, unlike humans, who balance uncertainty and empowerment. Representational analysis of the models with Sparse Autoencoders revealed that uncertainty and choices are represented at earlier transformer blocks, while empowerment values are processed later, causing LLMs to think too fast and make premature decisions, which hinders effective exploration. These findings shed light on the limitations of LLM exploration and suggest directions for improving their adaptability.
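The uncertainty-versus-empowerment distinction the abstract draws can be sketched as a simple value-based choice rule: score each candidate combination by a weighted blend of its uncertainty (novelty) and empowerment (how many future discoveries it unlocks), then choose via softmax. The following is a minimal, illustrative sketch, not the paper's actual model; all weights, scores, and names are assumptions made up for the example:

```python
import math

def choice_probabilities(uncertainty, empowerment, beta_u, beta_e, temperature=1.0):
    """Softmax over a weighted blend of uncertainty and empowerment scores.

    beta_e ~ 0 mimics the uncertainty-only strategy the paper attributes to
    most LLMs; a nonzero beta_e mimics the human-like blend of both signals.
    """
    scores = [(beta_u * u + beta_e * e) / temperature
              for u, e in zip(uncertainty, empowerment)]
    m = max(scores)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [x / z for x in exps]

# Three hypothetical candidate element combinations:
u = [0.9, 0.2, 0.4]   # uncertainty (novelty) of each combination
e = [0.1, 0.9, 0.6]   # empowerment (future possibilities it unlocks)

p_llm   = choice_probabilities(u, e, beta_u=1.0, beta_e=0.0)  # uncertainty-only
p_human = choice_probabilities(u, e, beta_u=1.0, beta_e=1.0)  # balanced

print(max(range(3), key=p_llm.__getitem__))    # 0: picks the most uncertain option
print(max(range(3), key=p_human.__getitem__))  # 1: picks the best blend of both signals
```

Under these made-up scores, the uncertainty-only agent favors the noisiest option while the balanced agent favors the one that also opens up the most future combinations, mirroring the LLM-versus-human contrast the paper reports.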
Problem

Research questions and friction points this paper is trying to address.

Exploratory Learning
Adaptability
Language Models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Little Alchemy 2
open-ended exploration
adaptive flexibility
Lan Pan
School of Psychology, Georgia Institute of Technology, Atlanta, USA
Hanbo Xie
PhD student, Georgia Institute of Technology
reinforcement learning, decision-making, large language models, computational cognitive science
Robert C. Wilson
School of Psychology, Georgia Institute of Technology, Atlanta, USA