YuLan-OneSim: Towards the Next Generation of Social Simulator with Large Language Models

📅 2025-05-12
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study addresses key limitations of conventional social simulators—high modeling barriers, poor scenario generalizability, scalability constraints, and insufficient automation in analysis—by proposing the first large language model (LLM)-driven next-generation social simulation platform. Methodologically, it integrates natural-language-based zero-code modeling, a curated library of 50 interdisciplinary default scenarios, a feedback-driven LLM dynamic evolution mechanism, a distributed simulation architecture supporting up to 100,000 agents, and an end-to-end AI social researcher system. Its core contribution is the novel “five-dimensional synergy” paradigm: (1) programming-free semantic modeling; (2) plug-and-play multi-domain scenario support; (3) evolvable simulation via online LLM fine-tuning; (4) high-concurrency, stable horizontal scalability; and (5) a closed-loop analytical workflow—from research question input to automated generation of structured technical reports. Experiments demonstrate >92% accuracy in automatic scenario construction, stable throughput at scale (10,000+ agents), and over 10× improvement in AI researcher analytical efficiency.

📝 Abstract
Leveraging large language model (LLM) based agents to simulate human social behaviors has recently gained significant attention. In this paper, we introduce a novel social simulator called YuLan-OneSim. Compared to previous works, YuLan-OneSim distinguishes itself in five key aspects: (1) Code-free scenario construction: Users can simply describe and refine their simulation scenarios through natural language interactions with our simulator. All simulation code is automatically generated, significantly reducing the need for programming expertise. (2) Comprehensive default scenarios: We implement 50 default simulation scenarios spanning 8 domains, including economics, sociology, politics, psychology, organization, demographics, law, and communication, broadening access for a diverse range of social researchers. (3) Evolvable simulation: Our simulator is capable of receiving external feedback and automatically fine-tuning the backbone LLMs, significantly enhancing the simulation quality. (4) Large-scale simulation: By developing a fully responsive agent framework and a distributed simulation architecture, our simulator can handle up to 100,000 agents, ensuring more stable and reliable simulation results. (5) AI social researcher: Leveraging the above features, we develop an AI social researcher. Users only need to propose a research topic, and the AI researcher will automatically analyze the input, construct simulation environments, summarize results, generate technical reports, review and refine the reports--completing the social science research loop. To demonstrate the advantages of YuLan-OneSim, we conduct experiments to evaluate the quality of the automatically generated scenarios, the reliability, efficiency, and scalability of the simulation process, as well as the performance of the AI social researcher.
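The closed-loop "AI social researcher" workflow in the abstract (topic in, reviewed report out) can be sketched as a simple pipeline. This is a minimal illustrative sketch only; every function name below is a hypothetical placeholder, not YuLan-OneSim's actual API:

```python
# Hypothetical sketch of the abstract's research loop:
# analyze -> construct scenario -> simulate -> report -> review/refine.
from dataclasses import dataclass

@dataclass
class ResearchRun:
    topic: str
    log: list  # records which pipeline stages ran

# Stand-in stage functions; a real system would back these with LLM calls.
def analyze(topic): return f"plan({topic})"
def build_scenario(plan): return f"scenario({plan})"
def simulate(scenario): return f"results({scenario})"
def summarize(results): return f"summary({results})"
def draft_report(summary): return f"report({summary})"
def review(report): return len(report) > 0  # stub acceptance check

def research_loop(topic: str, max_revisions: int = 2) -> ResearchRun:
    """Topic in, reviewed report out: the loop the abstract describes."""
    run = ResearchRun(topic, [])
    plan = analyze(topic);                     run.log.append("analyze")
    scenario = build_scenario(plan);           run.log.append("construct")
    results = simulate(scenario);              run.log.append("simulate")
    report = draft_report(summarize(results)); run.log.append("report")
    for _ in range(max_revisions):
        if review(report):                     # accept, or revise and retry
            run.log.append("accept")
            break
        report = draft_report(report);         run.log.append("revise")
    return run

run = research_loop("Does misinformation spread faster in dense networks?")
print(run.log)  # ['analyze', 'construct', 'simulate', 'report', 'accept']
```

The point of the sketch is the control flow, not the stages themselves: review sits inside a bounded revision loop, which is what closes the research cycle the abstract claims.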
Problem

Research questions and friction points this paper is trying to address.

High modeling barriers: addressed by code-free social simulation via natural-language interaction
Scalability constraints: addressed by large-scale simulation with up to 100,000 evolvable agents
Insufficient automation in analysis: addressed by an AI-driven research loop with automated analysis and reporting
Innovation

Methods, ideas, or system contributions that make the work stand out.

Code-free scenario construction via natural language
Evolvable simulation with automatic LLM fine-tuning
Large-scale simulation supporting 100,000 agents
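The "fully responsive agent framework" behind the large-scale simulation claim can be illustrated with a toy event-driven round loop: agents react concurrently to queued events, then mail is delivered for the next round. This is a sketch under stated assumptions, not the paper's implementation; the LLM call is stubbed, and all names are hypothetical.

```python
# Toy event-driven agent loop (illustrative only; not YuLan-OneSim's code).
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass, field

@dataclass
class Agent:
    agent_id: int
    inbox: list = field(default_factory=list)

    def step(self) -> list:
        """React to queued events; a real system would call an LLM here."""
        out = []
        for event in self.inbox:
            # Stub policy: forward each event to the next agent in a ring.
            out.append((self.agent_id + 1, f"reply:{event}"))
        self.inbox.clear()
        return out

def run_round(agents: dict, pool: ThreadPoolExecutor) -> None:
    """One round: all agents step concurrently, then messages are delivered."""
    outboxes = list(pool.map(lambda a: a.step(), agents.values()))
    for out in outboxes:
        for target, msg in out:
            agents[target % len(agents)].inbox.append(msg)

agents = {i: Agent(i) for i in range(1000)}  # scale knob; paper reports 100k
agents[0].inbox.append("seed-event")
with ThreadPoolExecutor(max_workers=8) as pool:
    for _ in range(3):
        run_round(agents, pool)
# After 3 rounds the seed event has propagated 3 hops around the ring.
print(agents[3].inbox)  # ['reply:reply:reply:seed-event']
```

In a distributed deployment the agent dictionary would be sharded across workers and delivery would go over the network, but the step/deliver round structure stays the same.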
Lei Wang
Gaoling School of Artificial Intelligence, Renmin University of China, Beijing, China
Heyang Gao
Gaoling School of Artificial Intelligence, Renmin University of China, Beijing, China
Xiaohe Bo
Gaoling School of Artificial Intelligence, Renmin University of China
Xu Chen
Gaoling School of Artificial Intelligence, Renmin University of China, Beijing, China
Ji-Rong Wen
Gaoling School of Artificial Intelligence, Renmin University of China
Large Language Model · Web Search · Information Retrieval · Machine Learning