Normative Common Ground Replication (NormCoRe): Replication-by-Translation for Studying Norms in Multi-agent AI

📅 2026-03-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing approaches often reduce norms to static alignment objectives, overlooking how norms emerge dynamically in multi-agent systems and the discrepancies between human and AI norm generation. This work proposes NormCoRe, a framework built on a replication-by-translation methodology for systematically mapping behavioral science experiments, such as those on distributive justice, into multi-agent AI environments. By varying the foundation model and the linguistically encoded role assignments, NormCoRe establishes a comparable experimental platform that enables systematic analysis and documentation of norm dynamics. The framework is demonstrated by replicating a seminal behavioral study on distributive justice, revealing that AI agents' normative judgments diverge significantly from human baselines and are sensitive to both model choice and role formulation, thereby offering a new pathway for understanding and designing social norms in artificial intelligence systems.

📝 Abstract
In the late 2010s, the fashion trend NormCore framed sameness as a signal of belonging, illustrating how norms emerge through collective coordination. Today, similar forms of normative coordination can be observed in systems based on Multi-agent Artificial Intelligence (MAAI), as AI-based agents deliberate, negotiate, and converge on shared decisions in fairness-sensitive domains. Yet, existing empirical approaches often treat norms as targets for alignment or replication, implicitly assuming equivalence between human subjects and AI agents and leaving collective normative dynamics insufficiently examined. To address this gap, we propose Normative Common Ground Replication (NormCoRe), a novel methodological framework to systematically translate the design of human subject experiments into MAAI environments. Building on behavioral science, replication research, and state-of-the-art MAAI architectures, NormCoRe maps the structural layers of human subject studies onto the design of AI agent studies, enabling systematic documentation of study design and analysis of norms in MAAI. We demonstrate the utility of NormCoRe by replicating a seminal experimental study on distributive justice, in which participants negotiate fairness principles under a "veil of ignorance". We show that normative judgments in AI agent studies can differ from human baselines and are sensitive to the choice of the foundation model and the language used to instantiate agent personas. Our work provides a principled pathway for analyzing norms in MAAI and helps to guide, reflect, and document design choices whenever AI agents are used to automate or support tasks formerly carried out by humans.
Problem

Research questions and friction points this paper is trying to address.

norms
multi-agent AI
collective coordination
normative dynamics
human-AI equivalence
Innovation

Methods, ideas, or system contributions that make the work stand out.

NormCoRe
multi-agent AI
norm replication
experimental translation
distributive justice