Value-Based Large Language Model Agent Simulation for Mutual Evaluation of Trust and Interpersonal Closeness

📅 2025-07-16
🤖 AI Summary
This study investigates whether value similarity drives the formation of trust and interpersonal closeness among LLM-based agents in artificial societies. Method: Using prompt engineering to explicitly control agents' value orientations, the authors designed a two-stage dialogue-based experimental framework and conducted controlled experiments in both English and Japanese settings, in which agents autonomously rated their mutual trust and closeness. Contribution/Results: Value similarity significantly and positively predicted both trust and closeness scores across agent pairs, with effects robust across languages. This work provides the first empirical validation that LLM agents can serve as controllable experimental testbeds for foundational social science theories, specifically the value-similarity hypothesis, thereby establishing a novel paradigm and methodological foundation for artificial society modeling and computational social science.

📝 Abstract
Large language models (LLMs) have emerged as powerful tools for simulating complex social phenomena using human-like agents with specific traits. In human societies, value similarity is important for building trust and close relationships; however, it remains unexplored whether this principle holds true in artificial societies comprising LLM agents. Therefore, this study investigates the influence of value similarity on relationship-building among LLM agents through two experiments. First, in a preliminary experiment, we evaluated the controllability of values in LLMs to identify the most effective model and prompt design for controlling the values. Subsequently, in the main experiment, we generated pairs of LLM agents imbued with specific values and analyzed their mutual evaluations of trust and interpersonal closeness following a dialogue. The experiments were conducted in English and Japanese to investigate language dependence. The results confirmed that pairs of agents with higher value similarity exhibited greater mutual trust and interpersonal closeness. Our findings demonstrate that the LLM agent simulation serves as a valid testbed for social science theories, contributes to elucidating the mechanisms by which values influence relationship building, and provides a foundation for inspiring new theories and insights into the social sciences.
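The core analysis described in the abstract, pairing value-conditioned agents and relating their value similarity to mutual ratings, can be sketched at the similarity-scoring step. The agent names, value dimensions, and numeric profiles below are hypothetical illustrations (the paper conditions values through prompts, not numeric vectors), and cosine similarity is one plausible similarity measure, not necessarily the one the authors used:

```python
import math

# Hypothetical value profiles: each agent is a vector over value
# dimensions (e.g. Schwartz-style orientations). In the paper, values
# are instilled via prompts; these vectors merely stand in for that.
AGENTS = {
    "A": [0.9, 0.1, 0.8, 0.2],
    "B": [0.8, 0.2, 0.7, 0.3],  # profile close to A
    "C": [0.1, 0.9, 0.2, 0.8],  # profile far from A
}

def value_similarity(u, v):
    """Cosine similarity between two value vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def pairwise_similarities(agents):
    """Similarity for every unordered agent pair."""
    names = sorted(agents)
    return {
        (x, y): value_similarity(agents[x], agents[y])
        for i, x in enumerate(names)
        for y in names[i + 1:]
    }

sims = pairwise_similarities(AGENTS)
# A and B share similar profiles, so sims[("A", "B")] exceeds
# sims[("A", "C")]; the paper's finding is that trust and closeness
# ratings after dialogue track this similarity ordering.
```

In the full experiment, each pair would additionally hold a dialogue and exchange trust/closeness ratings, which are then regressed on the similarity scores computed above.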
Problem

Research questions and friction points this paper is trying to address.

Investigates value similarity's impact on LLM agent relationships
Evaluates controllability of values in LLMs across languages
Tests if value similarity boosts trust and closeness in AI societies
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM agents simulate social value similarity effects
Controlled value experiments in English and Japanese
Value similarity enhances trust and closeness
Yuki Sakamoto
The University of Osaka, Graduate School of Engineering Science, Toyonaka 560-8531, Japan
Takahisa Uchida
Osaka University
dialogue system, human-robot interaction, user modeling, cognitive model
Hiroshi Ishiguro
Osaka University
robotics