🤖 AI Summary
This study introduces the construct of motivational systems from human psychology into large language model (LLM) research for the first time, asking whether LLMs exhibit human-like motivational mechanisms and how these shape decision-making, goal setting, and task performance. Combining behavioral experiments, self-report analyses, and external intervention manipulations, the research shows that the motivations LLMs report are structured, modifiable, and closely aligned with their actual effort expenditure and task performance. These findings indicate that motivation, as an organizing construct, systematically shapes LLM behavior, offering a novel theoretical perspective and methodological foundation for understanding and regulating the intrinsic drivers of large language models.
📝 Abstract
Motivation is a central driver of human behavior, shaping decisions, goals, and task performance. As large language models (LLMs) become increasingly aligned with human preferences, we ask whether they exhibit something akin to motivation. We examine whether LLMs "report" varying levels of motivation, how these reports relate to their behavior, and whether external factors can influence them. Our experiments reveal consistent and structured patterns that echo human psychology: self-reported motivation aligns with different behavioral signatures, varies across task types, and can be modulated by external manipulations. These findings demonstrate that motivation is a coherent organizing construct for LLM behavior, systematically linking reports, choices, effort, and performance, and revealing motivational dynamics that resemble those documented in human psychology. This perspective deepens our understanding of model behavior and its connection to human-inspired concepts.