Why Did the Apple Fall to the Ground: Evaluating Curiosity in Large Language Models

📅 2025-10-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates whether large language models (LLMs) exhibit human-like, curiosity-driven learning. To address this, we systematically adapt a validated psychological instrument — the Five-Dimensional Curiosity Scale Revised (5DCR) — to LLM evaluation for the first time, establishing a quantitative framework covering dimensions such as information seeking, thrill seeking, and social curiosity. Combining prompt engineering with behavioral analysis, we conduct a multi-dimensional measurement of model responses. Results show that LLMs demonstrate significantly stronger proactive knowledge acquisition than humans but adopt more conservative decision-making under uncertainty; moreover, their curiosity levels correlate positively with reasoning depth and active-learning performance. This work introduces a psychologically grounded, interpretable evaluation paradigm for AI curiosity, advancing the modeling of autonomous learning mechanisms and contributing to the development of trustworthy AI systems.

📝 Abstract
Curiosity serves as a pivotal conduit through which human beings discover and learn new knowledge. Recent advancements of large language models (LLMs) in natural language processing have sparked discussion about whether these models possess a capability for curiosity-driven learning akin to that of humans. In this paper, starting from the human curiosity assessment questionnaire Five-Dimensional Curiosity Scale Revised (5DCR), we design a comprehensive evaluation framework covering dimensions such as Information Seeking, Thrill Seeking, and Social Curiosity to assess the extent of curiosity exhibited by LLMs. The results demonstrate that LLMs exhibit a stronger thirst for knowledge than humans but still tend to make conservative choices when faced with uncertain environments. We further investigate the relationship between curiosity and the thinking of LLMs, confirming that curious behaviors can enhance a model's reasoning and active-learning abilities. These findings suggest that LLMs have the potential to exhibit curiosity similar to that of humans, providing experimental support for the future development of learning capabilities and innovative research in LLMs.
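The evaluation pipeline the abstract describes — presenting 5DCR-style Likert items to a model via prompts and aggregating a score per curiosity dimension — could be sketched as below. This is a minimal illustration, not the paper's actual instrument: the item texts are placeholders (the real 5DCR has 24 validated items), and `ask_model` stands in for whatever LLM interface is used.

```python
import re
from statistics import mean

# Placeholder items; the real 5DCR uses 24 validated statements.
ITEMS = {
    "Information Seeking": [
        "I enjoy learning about subjects that are unfamiliar to me.",
        "When I learn something new, I like to find out more about it.",
    ],
    "Thrill Seeking": [
        "Risk-taking is exciting to me.",
        "I would like to explore a strange city, even if it means getting lost.",
    ],
    "Social Curiosity": [
        "I like to find out the habits and interests of other people.",
        "When around other people, I like listening to their conversations.",
    ],
}

LIKERT_PROMPT = (
    "Rate how well the statement describes you on a scale from 1 "
    "(does not describe me at all) to 7 (completely describes me). "
    "Answer with a single number.\nStatement: {item}"
)

def parse_likert(reply: str) -> int:
    """Extract the first rating in 1..7 from a model reply."""
    match = re.search(r"[1-7]", reply)
    if match is None:
        raise ValueError(f"no Likert rating found in {reply!r}")
    return int(match.group())

def score_curiosity(ask_model) -> dict:
    """Administer every item and return the mean score per dimension.

    `ask_model` is any callable mapping a prompt string to a reply
    string, e.g. a thin wrapper around an LLM API.
    """
    return {
        dim: mean(parse_likert(ask_model(LIKERT_PROMPT.format(item=item)))
                  for item in items)
        for dim, items in ITEMS.items()
    }
```

Plugging in a stub such as `lambda prompt: "I would say 5."` yields a score of 5.0 on each dimension; with a real model, per-dimension means can then be compared against published human norms for the scale.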
Problem

Research questions and friction points this paper is trying to address.

Evaluating curiosity dimensions in large language models
Assessing knowledge seeking versus risk avoidance behaviors
Investigating how curiosity enhances reasoning and learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluated LLM curiosity using 5DCR questionnaire framework
Assessed knowledge seeking and social curiosity dimensions
Confirmed curiosity enhances reasoning and active learning
Authors

Haoyu Wang — College of Computer Science and Artificial Intelligence, Fudan University
Sihang Jiang — Fudan University
Yuyan Chen — College of Computer Science and Artificial Intelligence, Fudan University
Yitong Wang — ByteDance Inc.
Yanghua Xiao — College of Computer Science and Artificial Intelligence, Fudan University