Why We Need to Destroy the Illusion of Speaking to a Human: Critical Reflections on Ethics at the Front-End for LLMs

📅 2026-03-17
🤖 AI Summary
This study addresses the cognitive biases and ethical risks arising from users’ misattribution of human-like capacities to large language model (LLM)-driven chatbots. Drawing on philosophical inquiry and human-computer interaction theory, the authors develop a critical analytical framework to systematically examine the fundamental differences between human–LLM interactions and genuine interpersonal dialogue. The work argues for proactively dispelling the “anthropomorphic illusion” through intentional front-end design and advocates for ethical interaction principles centered on transparency and honesty. By clarifying the ontological and functional boundaries between human and artificial agents, this research offers theoretical guidance for designing LLM interfaces that foster more responsible and trustworthy AI interactions.

📝 Abstract
Conversation with chatbots based on Large Language Models (LLMs) such as ChatGPT has become one of the major forms of interaction with Artificial Intelligence (AI) in everyday life. What makes this interaction so convenient is that interacting with LLMs feels natural and resembles what we know from real, human conversations. At the same time, this seeming similarity is part of one of the ethical challenges of AI design, since it activates many misleading ideas about AI. We discuss similarities and differences between human–AI conversations and interpersonal conversation and highlight starting points for a more ethical design of AI at the front-end.
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
human-AI interaction
ethical design
conversational AI
illusion of humanity
Innovation

Methods, ideas, or system contributions that make the work stand out.

ethical AI design
human-AI interaction
LLM front-end
conversation illusion
responsible AI
Sarah Diefenbach
Ludwig-Maximilians-Universität München
Daniel Ullrich
LMU Munich, Department of Media Informatics