🤖 AI Summary
This study investigates cognitive disparities between humans and large language models (LLMs) in task generation. We conduct a controlled task-generation experiment comparing human participants with a GPT-4o-based generative agent, using validated psychological scales (e.g., Openness to Change) to quantify the influence of values and cognitive styles. Results show that human-generated tasks are significantly more social, embodied, and value-directed; by contrast, even when explicitly prompted with psychological traits, LLM-generated tasks remain overly abstract and lack concrete object interaction and socio-contextual grounding. This work provides the first systematic evidence of fundamental deficits in LLMs' modeling of intrinsic motivation and embodied cognition: their goal generation is not genuinely human-like, and linguistic fluency does not entail value-driven, context-sensitive goal construction. We argue for integrating intrinsic motivation mechanisms and physical and social grounding into the design of next-generation intelligent agents.
📝 Abstract
Humans constantly generate a diverse range of tasks guided by internal motivations. While generative agents powered by large language models (LLMs) aim to simulate this complex behavior, it remains uncertain whether they operate on similar cognitive principles. To address this, we conducted a task-generation experiment comparing human responses with those of an LLM agent (GPT-4o). We find that human task generation is consistently shaped by psychological drivers, including personal values (e.g., Openness to Change) and cognitive style. Even when these psychological drivers are explicitly provided to the LLM, it fails to reflect the corresponding behavioral patterns, producing tasks that are markedly less social, less physical, and thematically biased toward abstraction. Interestingly, although the LLM's tasks were perceived as more fun and novel, this only underscores the disconnect between its linguistic proficiency and its capacity to generate human-like, embodied goals. We conclude that there is a core gap between the value-driven, embodied nature of human cognition and the statistical patterns of LLMs, highlighting the need to incorporate intrinsic motivation and physical grounding into the design of more human-aligned agents.
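The abstract states that the psychological drivers were "explicitly provided to the LLM." The sketch below shows, under stated assumptions, what such trait-conditioned task generation with GPT-4o could look like; the scale names, score ranges, prompt wording, and use of the OpenAI Python client are illustrative assumptions, not the paper's released protocol.

```python
# Minimal sketch (assumed setup, not the authors' code): condition a GPT-4o agent
# on an explicit psychological trait profile before asking it to generate tasks.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical trait profile drawn from validated scales
# (e.g., a Schwartz-style "Openness to Change" score and a cognitive-style label).
trait_profile = {
    "openness_to_change": 5.2,   # illustrative value on a 1-6 scale
    "cognitive_style": "intuitive",
}

system_prompt = (
    "You are a person with the following psychological profile:\n"
    f"- Openness to Change (1-6): {trait_profile['openness_to_change']}\n"
    f"- Cognitive style: {trait_profile['cognitive_style']}\n"
    "Acting consistently with this profile, list 5 tasks you would genuinely "
    "want to do today. Describe each task concretely."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "What tasks would you like to do today?"},
    ],
    temperature=1.0,
)

print(response.choices[0].message.content)
```

Comparing outputs generated this way against human responses collected with the same instructions is the kind of setup the abstract describes; the paper's finding is that even with the profile injected, the generated tasks stay abstract and weakly grounded.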