🤖 AI Summary
This paper addresses how LLM-based multi-agent systems can advance social science research while confronting challenges of reproducibility, ethics, and bias. Methodologically, it introduces a six-level evolutionary framework that characterizes the paradigm shift from single-task LLM agents to multi-agent social simulation systems; proposes the first LLM-agent capability taxonomy tailored to social science; and establishes multi-agent systems as a novel paradigm for experimenting with social mechanisms, integrating agent-based social simulation modeling, dynamic role negotiation, cross-level validation protocols, and standardized evaluation metrics. The contributions are threefold: (1) clarifying the technical boundaries and domain applicability of each level; (2) shifting social inquiry from static analysis to dynamic, generative investigation; and (3) laying a methodological foundation for trustworthy, interpretable, and auditable AI-augmented social science.
📝 Abstract
As large language models (LLMs) transition from static tools to fully agentic systems, their potential for transforming social science research has become increasingly evident. This paper introduces a structured framework for understanding the diverse applications of LLM-based agents, ranging from simple data processors to complex multi-agent systems capable of simulating emergent social dynamics. By mapping this developmental continuum across six levels, the paper clarifies the technical and methodological boundaries between different agentic architectures, providing a comprehensive overview of current capabilities and future potential. It highlights how lower-tier systems streamline conventional tasks such as text classification and data annotation, while higher-tier systems enable novel forms of inquiry, including the study of group dynamics, norm formation, and large-scale social processes. These advancements also introduce significant challenges, however, including reproducibility, ethical oversight, and the risk of emergent biases. The paper critically examines these concerns, emphasizing the need for robust validation protocols, interdisciplinary collaboration, and standardized evaluation metrics. It argues that while LLM-based agents hold transformative potential for the social sciences, realizing this promise will require careful, context-sensitive deployment and ongoing methodological refinement. The paper concludes with a call for future research that balances technical innovation with ethical responsibility, encouraging the development of agentic systems that not only replicate but also extend the frontiers of social science, offering new insights into the complexities of human behavior.