🤖 AI Summary
This study investigates whether value similarity drives the formation of trust and interpersonal intimacy among LLM-based agents in artificial societies. Method: Using prompt engineering to explicitly modulate agents’ value orientations, we designed a two-stage dialogic experimental framework and ran cross-lingual controlled experiments in English and Japanese settings, in which agents autonomously evaluated their mutual trust and intimacy. Contribution/Results: Value similarity significantly and positively predicted both trust and intimacy scores across agent pairs, with robust, language-invariant effects. This work provides the first empirical validation that LLM agents can serve as controllable experimental testbeds for a foundational social science theory, the value-similarity hypothesis, thereby establishing a novel paradigm and methodological foundation for artificial-society modeling and computational social science.
📝 Abstract
Large language models (LLMs) have emerged as powerful tools for simulating complex social phenomena using human-like agents with specific traits. In human societies, value similarity is important for building trust and close relationships; however, whether this principle holds in artificial societies composed of LLM agents remains unexplored. This study therefore investigates the influence of value similarity on relationship building among LLM agents through two experiments. First, in a preliminary experiment, we evaluated the controllability of values in LLMs to identify the most effective model and prompt design for value control. Subsequently, in the main experiment, we generated pairs of LLM agents imbued with specific values and analyzed their mutual evaluations of trust and interpersonal closeness following a dialogue. The experiments were conducted in both English and Japanese to investigate language dependence. The results confirmed that agent pairs with higher value similarity exhibited greater mutual trust and interpersonal closeness. Our findings demonstrate that LLM agent simulation serves as a valid testbed for social science theories, contributes to elucidating the mechanisms by which values influence relationship building, and provides a foundation for inspiring new theories and insights in the social sciences.