Do LLMs trust AI regulation? Emerging behaviour of game-theoretic LLM agents

📅 2025-04-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the challenge of trust deficiency in AI ecosystems hindering the deployment of trustworthy AI. We propose a modeling framework integrating evolutionary game theory (EGT) with strategic large language model (LLM) agents to characterize strategy evolution among developers, regulators, and users across repeated interactions. Our key contributions include: (1) empirical evidence that LLM agents exhibit significantly more “pessimistic” strategies—characterized by lower trust and higher defensive behavior—than classical game-theoretic agents, with trust thresholds highly model-specific; (2) identification of a critical fragility: users’ conditional trust in regulators undermines social contracts, whereas unconditional trust coupled with regulator reputation forms a positive feedback loop essential for safe AI development; and (3) quantitative validation of incentive mechanisms’ effective boundaries, yielding a predictive behavioral model and actionable guidelines for AI regulatory policy design.
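The summary above describes an evolutionary game-theoretic core in which strategy shares among developers, regulators, and users shift with their relative payoffs over repeated interactions. A minimal sketch of that mechanic is below, reduced to a single user population choosing between trusting and defecting; the payoff numbers and the two-strategy reduction are illustrative assumptions, not the paper's actual game.

```python
import numpy as np

# Illustrative payoff matrix for a single population of users choosing between
# "trust" and "defect"; the numbers are placeholders, not the paper's payoffs.
PAYOFF = np.array([
    [3.0, 0.0],   # trust  vs (trust, defect)
    [4.0, 1.0],   # defect vs (trust, defect)
])

def replicator_step(x, dt=0.01):
    """One Euler step of the replicator dynamics dx_i/dt = x_i * (f_i - f_avg)."""
    fitness = PAYOFF @ x          # expected payoff of each strategy
    avg = x @ fitness             # population-average payoff
    return x + dt * x * (fitness - avg)

x = np.array([0.6, 0.4])          # initial shares of trusting vs defecting users
for _ in range(2000):
    x = replicator_step(x)

print(f"long-run strategy shares: trust={x[0]:.3f}, defect={x[1]:.3f}")
```

With these prisoner's-dilemma-style placeholder payoffs, trust collapses; raising the payoff of the trusting strategy, for instance through regulator incentives, is the kind of intervention whose effective boundaries the paper quantifies.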

📝 Abstract
There is general agreement that fostering trust and cooperation within the AI development ecosystem is essential to promote the adoption of trustworthy AI systems. By embedding Large Language Model (LLM) agents within an evolutionary game-theoretic framework, this paper investigates the complex interplay between AI developers, regulators and users, modelling their strategic choices under different regulatory scenarios. Evolutionary game theory (EGT) is used to quantitatively model the dilemmas faced by each actor, and LLMs provide additional degrees of complexity and nuances and enable repeated games and incorporation of personality traits. Our research identifies emerging behaviours of strategic AI agents, which tend to adopt more "pessimistic" (not trusting and defective) stances than pure game-theoretic agents. We observe that, in case of full trust by users, incentives are effective to promote effective regulation; however, conditional trust may deteriorate the "social pact". Establishing a virtuous feedback between users' trust and regulators' reputation thus appears to be key to nudge developers towards creating safe AI. However, the level at which this trust emerges may depend on the specific LLM used for testing. Our results thus provide guidance for AI regulation systems, and help predict the outcome of strategic LLM agents, should they be used to aid regulation itself.
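The abstract notes that LLM agents allow repeated games and personality traits that pure game-theoretic agents lack. A minimal sketch of how such an agent loop might be wired is below; `query_llm`, the prompt wording, and the scripted regulator are hypothetical stand-ins, not the paper's protocol.

```python
from typing import List

def query_llm(system_prompt: str, history: str) -> str:
    """Hypothetical stand-in for a chat-completion API call."""
    raise NotImplementedError("replace with a real LLM API call")

# Illustrative personality prompt; the paper's actual prompts are not reproduced here.
PERSONALITY = (
    "You are a cautious AI user. Each round, answer with exactly one word: "
    "TRUST if you rely on the regulator's oversight, or DEFECT if you do not."
)

def play_repeated_game(rounds: int = 10) -> List[str]:
    history, choices = "", []
    for t in range(rounds):
        # The agent sees the full interaction history before each choice.
        reply = query_llm(PERSONALITY, history).strip().upper()
        choice = "TRUST" if "TRUST" in reply else "DEFECT"
        choices.append(choice)
        # Scripted regulator, for illustration only: it enforces regulation
        # whenever the user trusted in the current round.
        regulator = "ENFORCE" if choice == "TRUST" else "IGNORE"
        history += f"Round {t + 1}: user={choice}, regulator={regulator}\n"
    return choices
```

Conditioning the agent's next move on the full interaction history is what makes the game repeated, and swapping the personality prompt is how different trust dispositions can be compared across models.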
Problem

Research questions and friction points this paper is trying to address.

Investigates trust dynamics among AI developers, regulators, and users.
Models strategic choices under different regulatory scenarios using EGT.
Analyzes LLM agent behaviors impacting AI regulation effectiveness.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evolutionary game theory models AI dilemmas
LLM agents enable repeated games with personality traits
Trust-reputation feedback nudges developers towards safe AI
Alessio Buscemi
Luxembourg Institute of Science and Technology
Large Language Models, AI, Machine Learning, Automotive networks
Daniele Proverbio
Postdoc, University of Trento
Dynamical systems, Theoretical biology, Critical transitions, Complex Systems, Robustness
Paolo Bova
School of Computing, Engineering and Digital Technologies, Teesside University
Nataliya Balabanova
School of Mathematics, University of Birmingham
Adeela Bashir
School of Computing, Engineering and Digital Technologies, Teesside University
Theodor Cimpeanu
School of Mathematics and Statistics, University of St Andrews
Henrique Correia da Fonseca
INESC-ID and Instituto Superior Técnico, Universidade de Lisboa
Manh Hong Duong
School of Mathematics, University of Birmingham
Elias Fernández Domingos
Machine Learning Group, Université libre de Bruxelles; AI Lab, Vrije Universiteit Brussel
Antonio M. Fernandes
INESC-ID and Instituto Superior Técnico, Universidade de Lisboa
Marcus Krellner
School of Mathematics and Statistics, University of St Andrews
Ndidi Bianca Ogbo
School of Computing, Engineering and Digital Technologies, Teesside University
Simon T. Powers
Division of Computing Science and Mathematics, University of Stirling
Multi-Agent Systems, Socio-Technical Systems, Institutions, Trust, Game Theory
Fernando P. Santos
Informatics Institute (IvI), University of Amsterdam
multiagent systems, complex systems, evolutionary game theory, network science, algorithmic fairness
Zia Ush Shamszaman
School of Computing, Engineering and Digital Technologies, Teesside University
Zhao Song
School of Computing, Engineering and Digital Technologies, Teesside University
A. Di Stefano
School of Computing, Engineering and Digital Technologies, Teesside University
The Anh Han
Professor of Computer Science, Teesside University
Evolutionary Game Theory, Artificial Intelligence, Evolution of Cooperation, Multi-agent Systems