A Scoping Review of the Ethical Perspectives on Anthropomorphising Large Language Model-Based Conversational Agents

📅 2026-01-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the ethical risks—such as deception and overreliance—arising from anthropomorphism in large language model (LLM) conversational agents, an area lacking systematic synthesis and a unified evaluation framework. Employing a scoping review methodology, the research integrates multi-source literature from five databases and three preprint platforms, combining bibliometric and content analyses to map the interdisciplinary ethical landscape for the first time. Findings reveal that while scholars commonly adopt attribution-based definitions of anthropomorphism, operationalizations remain highly heterogeneous. Moreover, ethical discourse is predominantly risk-oriented and lacks robust empirical grounding, limiting its utility for governance. In response, this work proposes an integrative research agenda that bridges interaction effects with practical governance mechanisms.

📝 Abstract
Anthropomorphisation -- the phenomenon whereby non-human entities are ascribed human-like qualities -- has become increasingly salient with the rise of large language model (LLM)-based conversational agents (CAs). Unlike earlier chatbots, LLM-based CAs routinely generate interactional and linguistic cues, such as first-person self-reference and epistemic and affective expressions, that empirical work shows can increase engagement. On the other hand, anthropomorphisation raises ethical concerns, including deception, overreliance, and exploitative relationship framing, while some authors argue that anthropomorphic interaction may support autonomy, well-being, and inclusion. Despite increasing interest in the phenomenon, the literature remains fragmented across domains and varies substantially in how it defines, operationalizes, and normatively evaluates anthropomorphisation. This scoping review maps ethically oriented work on anthropomorphising LLM-based CAs across five databases and three preprint repositories. We synthesize (1) conceptual foundations, (2) ethical challenges and opportunities, and (3) methodological approaches. We find convergence on attribution-based definitions but substantial divergence in operationalization, a predominantly risk-forward normative framing, and limited empirical work that links observed interaction effects to actionable governance guidance. We conclude with a research agenda and design/governance recommendations for ethically deploying anthropomorphic cues in LLM-based conversational agents.
Problem

Research questions and friction points this paper is trying to address.

anthropomorphisation
large language models
conversational agents
ethical concerns
scoping review
Innovation

Methods, ideas, or system contributions that make the work stand out.

anthropomorphisation
large language models
conversational agents
ethical AI
scoping review
Andrea Ferrario
University of Zurich, SUPSI/IDSIA, and ETH Zurich
Philosophy of AI, Machine Learning, AI in Medicine
R. Vinay
Institute of Biomedical Ethics and History of Medicine, University of Zürich, Zürich, Switzerland
Matteo Casserini
SUPSI, Dipartimento Tecnologie Innovative, Lugano, Switzerland
Alessandro Facchini
Associate Professor, SUPSI, IDSIA USI-SUPSI
Logic, Artificial Intelligence, Epistemology, Ethics