Relational Norms for Human-AI Cooperation

📅 2025-02-17
🤖 AI Summary
This paper investigates whether and how human social relational norms (such as hierarchy, care, reciprocity, and pair-bonding) can be appropriately adapted to human-AI interactions, particularly when AI assumes anthropomorphized roles (e.g., mentor, mental health supporter, companion). It identifies a core problem: AI's fundamental ontological features, including the absence of conscious experience and immunity to fatigue, undermine its capacity to authentically instantiate such norms, risking normative misalignment, emotional dependency, and spillover effects into human-human relationships. Methodologically, the study develops the first interdisciplinary conceptual framework centered on "relational appropriateness," defining normative boundaries for AI across social roles through integrated philosophical analysis, psychological experimentation, relationship-science modeling, and ethical reasoning. The contribution is a systematic, actionable norm-mapping guideline. These findings provide a theoretical foundation and practical roadmap for AI design, human-AI interaction ethics, and policy development, advancing a human-centered, trustworthy, and sustainable paradigm for human-AI collaboration.

📝 Abstract
How we should design and interact with social artificial intelligence depends on the socio-relational role the AI is meant to emulate or occupy. In human society, relationships such as teacher-student, parent-child, neighbors, siblings, or employer-employee are governed by specific norms that prescribe or proscribe cooperative functions including hierarchy, care, transaction, and mating. These norms shape our judgments of what is appropriate for each partner. For example, workplace norms may allow a boss to give orders to an employee, but not vice versa, reflecting hierarchical and transactional expectations. As AI agents and chatbots powered by large language models are increasingly designed to serve roles analogous to human positions (such as assistant, mental health provider, tutor, or romantic partner), it is imperative to examine whether and how human relational norms should extend to human-AI interactions. Our analysis explores how differences between AI systems and humans, such as the absence of conscious experience and immunity to fatigue, may affect an AI's capacity to fulfill relationship-specific functions and adhere to corresponding norms. This analysis, which is a collaborative effort by philosophers, psychologists, relationship scientists, ethicists, legal experts, and AI researchers, carries important implications for AI systems design, user behavior, and regulation. While we accept that AI systems can offer significant benefits such as increased availability and consistency in certain socio-relational roles, they also risk fostering unhealthy dependencies or unrealistic expectations that could spill over into human-human relationships. We propose that understanding and thoughtfully shaping (or implementing) suitable human-AI relational norms will be crucial for ensuring that human-AI interactions are ethical, trustworthy, and favorable to human well-being.
Problem

Research questions and friction points this paper is trying to address.

Examining human relational norms for AI interactions.
Assessing AI's ability to fulfill relationship-specific functions.
Proposing norms to ensure ethical human-AI cooperation.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Design of human-AI relational norms
Analysis of AI adherence to human norms
Shaping of ethical AI-human interaction
Authors

Brian D. Earp
Associate Professor, National University of Singapore and Research Associate, University of Oxford
Bioethics, Philosophy of Science & AI, Relational Moral Psychology, Sex & Gender, Children's Rights
Sebastian Porsdam Mann
Center for Advanced Studies in Bioscience Innovation Law (CeBIL), Faculty of Law, University of Copenhagen; Faculty of Law, University of Oxford; and Centre for Biomedical Ethics, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
Mateo Aboy
Centre for Law, Medicine and Life Sciences and Center of Intellectual Property and Information Law, Faculty of Law, University of Cambridge, UK
Edmond Awad
Department of Economics, University of Exeter Business School
Computational Social Science, Artificial Intelligence, Computational Ethics, Multi-agent Systems, Argumentation
Monika Betzler
Faculty of Philosophy, Ludwig-Maximilians-Universität München
Marietjie Botes
Rachel Calcott
Mina Caraccio
Nick Chater
Mark Coeckelbergh
Professor of Philosophy of Media and Technology, University of Vienna
Philosophy of Technology, Ethics
Mihaela Constantinescu
Hossein Dabbagh
Kate Devlin
Xiaojun Ding
V. Dranseika
J. A. Everett
Ruiping Fan
Faisal Feroz
Kathryn B. Francis
Senior Researcher in Design Bioethics and Moral Psychology, University of Oxford
Moral Psychology, Bioethics, Virtual Reality, Judgment-Behaviour Discrepancy
Cindy Friedman
Orsolya Friedrich
Iason Gabriel
Senior Staff Research Scientist, Google DeepMind
Political Theory, Moral Philosophy, Philosophy of AI, Global Justice, Human Rights
Ivar Hannikainen
Julie Hellmann
Arasj Khodadade Jahrome
N. Janardhanan
Paulius Jurcys
Andreas Kappes
Maryam Ali Khan
Gordon Kraft-Todd
Partner | Ker-twang
Prosocial Behavior, Morality, Field Experiments, Public Goods, Empathy
Maximilian Kroner Dale
S. Laham
Muriel Leuenberger
Jonathan Lewis
Peng Liu
David M. Lyreskog
Matthijs Maas
John McMillan
Emil G. Mihailov
Timo Minssen
Law Professor, CeBIL Director, Univ. of Copenhagen; Research Affiliate at Cambridge & Harvard Univ.
IP & Regulatory Law, Law & Ethics of Emerging Tech, Privacy Law, AI & Quantum Tech, Drug R&D
J. Monrad
Kathryn Muyskens
Simon Myers
Sven Nyholm
Anna Puzio
Christopher Register
Madeline G. Reinecke
Adam Safron
Allen Discovery Center, Tufts University
Henry Shevlin
Peter V. Treit
Cristina Voinea
Karen Yan
Anda Zahiu
Renwen Zhang
Assistant Professor, Nanyang Technological University
HCI, Mental Health, Social Support, Health Communication, Interpersonal Communication
Hazem Zohny
Walter Sinnott-Armstrong
Duke University
Philosophy, Moral Psychology, Neuroscience, Law, Artificial Intelligence
Ilina Singh
Julian Savulescu
Chen Su Lan Centennial Professor of Medical Ethics
Medical Ethics, Practical Ethics, Applied Ethics, Bioethics, Neuroethics
Margaret S. Clark