Psychological and behavioural responses in human-agent vs. human-human interactions: a systematic review and meta-analysis

📅 2025-09-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study systematically compares human psychological and behavioral responses in human–AI interaction versus human–human interaction, focusing on prosocial behavior, moral engagement, social cognition, trust, and task performance. Employing an interdisciplinary meta-analysis integrating frequentist and Bayesian frameworks, it synthesizes over 100 empirical studies, conducting heterogeneity assessments, meta-regression, and incorporating multidimensional moderators—including social perception, responsibility attribution, and agency judgments. Results reveal significantly lower prosociality and moral engagement toward intelligent agents, alongside reduced attributions of agency and moral patiency; however, no robust differences emerge in task performance or subjective trust—effects are highly context-dependent. The core contribution is the formulation and empirical validation of the “instrumental valuation” theory: even when functionally equivalent, humans consistently withhold intrinsic moral value from AI agents, exposing a fundamental value-identification gap in human–AI interaction.

📝 Abstract
Interactive intelligent agents are being integrated across society. Despite these agents achieving human-like capabilities, humans' responses to them remain poorly understood, with research fragmented across disciplines. We conducted the first systematic synthesis comparing a range of psychological and behavioural responses in matched human-agent vs. human-human dyadic interactions. A total of 162 eligible studies (146 contributed to the meta-analysis; 468 effect sizes) were included in the systematic review and meta-analysis, which integrated frequentist and Bayesian approaches. Our results indicate that individuals exhibited less prosocial behaviour and moral engagement when interacting with agents vs. humans. They attributed less agency and responsibility to agents, perceiving them as less competent, likeable, and socially present. In contrast, individuals' social alignment (i.e., alignment or adaptation of internal states and behaviours with partners), trust in partners, personal agency, task performance, and interaction experiences were generally comparable when interacting with agents vs. humans. We observed high effect-size heterogeneity for many subjective responses (i.e., social perceptions of partners, subjective trust, and interaction experiences), suggesting context-dependency of partner effects. By examining the characteristics of studies, participants, partners, interaction scenarios, and response measures, we also identified several moderators shaping partner effects. Overall, functional behaviours and interactive experiences with agents can resemble those with humans, whereas fundamental social attributions and moral/prosocial concerns lag behind in human-agent interactions. Agents are thus afforded instrumental value on par with humans but lack comparable intrinsic value, providing practical implications for agent design and regulation.
Problem

Research questions and friction points this paper is trying to address.

Comparing psychological and behavioral responses in human-agent versus human-human interactions
Examining whether individuals exhibit less prosocial behavior toward agents than toward humans
Investigating differences in social attributions and moral engagement with intelligent agents
Innovation

Methods, ideas, or system contributions that make the work stand out.

Systematic synthesis comparing human-agent and human-human interactions
Integrated frequentist and Bayesian meta-analysis approaches
Identified moderators shaping partner effects in interactions
Jianan Zhou
Dyson School of Design Engineering, Imperial College London, London, United Kingdom
Fleur Corbett
Dyson School of Design Engineering, Imperial College London, London, United Kingdom
Joori Byun
Independent Researcher
Talya Porat
Imperial College London
Human Computer Interaction · Human Factors · Cognitive Engineering · Decision Making
Nejra van Zalk
Associate Professor in Design Psychology, Imperial College London
Psychology · Human-tech interaction