🤖 AI Summary
This study addresses the cognitive biases and ethical risks arising from users’ misattribution of human-like capacities to large language model (LLM)-driven chatbots. Drawing on philosophical inquiry and human-computer interaction theory, the authors develop a critical analytical framework to systematically examine the fundamental differences between human–LLM interactions and genuine interpersonal dialogue. The work argues for proactively dispelling the “anthropomorphic illusion” through intentional front-end design and advocates for ethical interaction principles centered on transparency and honesty. By clarifying the ontological and functional boundaries between human and artificial agents, this research offers theoretical guidance for designing LLM interfaces that foster more responsible and trustworthy AI interactions.
📝 Abstract
Conversation with chatbots based on Large Language Models (LLMs), such as ChatGPT, has become one of the major forms of interaction with Artificial Intelligence (AI) in everyday life. What makes this interaction so convenient is that it feels natural and resembles what we know from real, human conversations. At the same time, this apparent similarity is part of one of the ethical challenges of AI design, since it activates many misleading ideas about AI. We discuss similarities and differences between human–AI conversations and interpersonal conversations and highlight starting points for more ethical design of AI at the front-end.