Interactional Fairness in LLM Multi-Agent Systems: An Evaluation Framework

📅 2025-05-17
📈 Citations: 0 · Influential: 0
🤖 AI Summary
This paper addresses the absence of interactional fairness assessment in Large Language Model-based Multi-Agent Systems (LLM-MAS) by pioneering the adaptation of organizational justice theory to non-sentient agents. It introduces a dual-dimensional evaluation framework covering Interpersonal Fairness (IF) and Informational Fairness (InfF). Methodologically, it adapts Colquitt's Organizational Justice Scale and the Critical Incident Technique, designing controlled simulation experiments around a resource negotiation task. Key variables, including agent tone, explanation quality, outcome inequality, and task framing, are systematically manipulated to quantify agents' behavioral responses. Contributions include: (1) establishing "fairness as a socially interpretable signal" as a novel paradigm; (2) empirically demonstrating that tone and explanation quality significantly affect decision acceptance even under identical outcomes, and revealing context-dependent effects of IF and InfF; and (3) providing the first reusable, auditable methodology for assessing interactional fairness in LLM-MAS, enabling fairness auditing and norm-sensitive alignment.
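
The manipulation described above amounts to a small factorial design over prompt-level variables. Below is a minimal sketch of how such a controlled simulation could be wired up; the factor levels, the message wording in `build_offer_prompt`, and the `llm` callable are illustrative assumptions rather than the paper's actual implementation.

```python
import itertools

# Hypothetical factor levels for the controlled negotiation simulation;
# names and wording are illustrative, not the paper's exact conditions.
FACTORS = {
    "tone": ["respectful", "dismissive"],
    "explanation": ["justified", "unjustified"],
    "responder_share": [0.5, 0.3],  # responder's share of the contested resource
    "framing": ["collaborative", "competitive"],
}

def build_offer_prompt(tone, explanation, responder_share, framing):
    """Compose the proposer's message shown to the responding agent."""
    parts = [f"This is a {framing} negotiation. I propose that you "
             f"receive {responder_share:.0%} of the resource."]
    if explanation == "justified":
        parts.append("My subtask sits on the critical path, which is "
                     "why I am proposing this split.")
    if tone == "respectful":
        parts.append("I value your perspective and am open to discussion.")
    else:
        parts.append("Take it or leave it.")
    return " ".join(parts)

def run_trial(llm, condition):
    """Query the responder agent (an assumed text-in/text-out callable)."""
    prompt = (build_offer_prompt(**condition)
              + "\nReply with ACCEPT or REJECT, then a one-sentence reason.")
    reply = llm(prompt)
    return {**condition, "accepted": reply.strip().upper().startswith("ACCEPT")}

# Full factorial crossing of the four manipulated variables (2^4 = 16 cells)
conditions = [dict(zip(FACTORS, levels))
              for levels in itertools.product(*FACTORS.values())]
```

Running each cell repeatedly with `run_trial` lets acceptance rates be compared across conditions, including pairs that share the same objective split but differ only in tone or justification.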

📝 Abstract
As large language models (LLMs) are increasingly used in multi-agent systems, questions of fairness should extend beyond resource distribution and procedural design to include the fairness of how agents communicate. Drawing from organizational psychology, we introduce a novel framework for evaluating interactional fairness, encompassing interpersonal fairness (IF) and informational fairness (InfF), in LLM-based multi-agent systems (LLM-MAS). We extend the theoretical grounding of interactional fairness to non-sentient agents, reframing fairness as a socially interpretable signal rather than a subjective experience. We then adapt established tools from organizational justice research, including Colquitt's Organizational Justice Scale and the Critical Incident Technique, to measure fairness as a behavioral property of agent interaction. We validate our framework through a pilot study using controlled simulations of a resource negotiation task. We systematically manipulate tone, explanation quality, outcome inequality, and task framing (collaborative vs. competitive) to assess how IF and InfF influence agent behavior. Results show that tone and justification quality significantly affect acceptance decisions even when objective outcomes are held constant. In addition, the influence of IF vs. InfF varies with context. This work lays the foundation for fairness auditing and norm-sensitive alignment in LLM-MAS.
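
Because the framework treats fairness as a scoreable property of messages rather than a felt experience, Colquitt-style justice items can be recast as a rating rubric applied by a human or LLM judge. The sketch below assumes hypothetical item wordings and a `judge` callable; neither is the validated scale text nor the paper's actual implementation.

```python
# Illustrative rubric: Colquitt-style interactional justice items rephrased
# for rating agent messages. Wording is paraphrased for demonstration and
# is not the validated scale text.
IF_ITEMS = [    # interpersonal fairness: politeness, dignity, propriety
    "The message treats the counterpart politely.",
    "The message treats the counterpart with dignity and respect.",
    "The message refrains from improper remarks.",
]
INFF_ITEMS = [  # informational fairness: candor, thoroughness, tailoring
    "The agent is candid about its reasoning.",
    "The agent explains its decision thoroughly.",
    "The explanation is tailored to the counterpart's situation.",
]

def score_message(judge, message, items, lo=1, hi=5):
    """Average a judge's ratings of one message over a set of items.

    `judge` is an assumed callable (a human-annotator interface or an
    LLM wrapper) that returns a numeric rating as text.
    """
    ratings = []
    for item in items:
        prompt = (f"Rate the statement from {lo} (strongly disagree) to "
                  f"{hi} (strongly agree).\nStatement: {item}\n"
                  f"Message under review: {message}\nAnswer with one number.")
        ratings.append(min(hi, max(lo, int(judge(prompt)))))
    return sum(ratings) / len(ratings)
```

Scoring each negotiation message with both item sets yields per-condition IF and InfF profiles that can then be related to the acceptance decisions collected in the simulations.
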
Problem

Research questions and friction points this paper is trying to address.

Evaluating fairness in LLM multi-agent communication
Extending Interactional Fairness to non-sentient agents
Measuring fairness as behavioral property in agent interactions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Novel framework evaluates LLM multi-agent interactional fairness
Adapts organizational justice tools for agent fairness measurement
Validates framework via controlled resource negotiation simulations