Simulating Generative Social Agents via Theory-Informed Workflow Design

📅 2025-08-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing social agents are predominantly designed for specific scenarios and lack a unified theoretical foundation, resulting in poor cross-context generalization and insufficient behavioral consistency and realism. This paper introduces the first generative social agent framework grounded in social cognitive theory, comprising three synergistic modules (motivation modeling, hierarchical behavior planning, and online learning) that enable high-fidelity, interpretable simulation of human social behavior. Built upon large language models, the framework deeply integrates core principles of social cognition to support dynamic adaptation and multi-agent interaction. Experimental evaluation demonstrates up to a 75% reduction in deviation from ground-truth behavioral data across multiple fidelity metrics compared to baseline approaches. Ablation studies confirm that each module makes a significant and non-redundant contribution to behavioral accuracy.

📝 Abstract
Recent advances in large language models have demonstrated strong reasoning and role-playing capabilities, opening new opportunities for agent-based social simulations. However, most existing agent implementations are scenario-tailored, without a unified framework to guide the design. This lack of a general social agent limits their ability to generalize across different social contexts and to produce consistent, realistic behaviors. To address this challenge, we propose a theory-informed framework that provides a systematic design process for LLM-based social agents. Our framework is grounded in principles from Social Cognition Theory and introduces three key modules: motivation, action planning, and learning. These modules jointly enable agents to reason about their goals, plan coherent actions, and adapt their behavior over time, leading to more flexible and contextually appropriate responses. Comprehensive experiments demonstrate that our theory-driven agents reproduce realistic human behavior patterns under complex conditions, achieving up to 75% lower deviation from real-world behavioral data across multiple fidelity metrics compared to classical generative baselines. Ablation studies further show that removing the motivation, planning, or learning module increases errors by 1.5 to 3.2 times, confirming their distinct and essential contributions to generating realistic and coherent social behaviors.
Problem

Research questions and friction points this paper is trying to address.

Lack of unified framework for LLM-based social agents
Difficulty generalizing across diverse social contexts
Inconsistent and unrealistic agent behavior generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Theory-informed framework for LLM-based agents
Modules: motivation, action planning, learning
75% lower deviation from real-world data
🔎 Similar Papers
2024-10-06 · Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (System Demonstrations) · Citations: 13