Presenting Large Language Models as Companions Affects What Mental Capacities People Attribute to Them

📅 2025-10-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how metaphorical framings of large language models (LLMs) in public discourse shape attributions of mental capacities and subsequent user behavior. Method: In two randomized controlled experiments, participants watched a short informational video presenting LLMs as machines, tools, or companions, or watched no video, then completed standardized mental-capacity attribution scales; a follow-up study additionally measured reliance on LLM-generated responses during factual information seeking. Contribution/Results: The companion framing significantly increased attributions of intentionality, memory, and other mental capacities (p < 0.001) relative to the other conditions, a result replicated in the follow-up study, which also found nuanced effects on participants' reliance on model outputs. The work provides experimental evidence of a causal effect of non-technical linguistic framing on social cognition of AI, underscoring the role of metaphor in public understanding of AI and human–AI interaction and offering actionable guidance for AI ethics governance and science communication practice.

📝 Abstract
How does messaging about large language models (LLMs) in public discourse influence the way people think about and interact with these models? To answer this question, we randomly assigned participants (N = 470) to watch a short informational video presenting LLMs as either machines, tools, or companions -- or to watch no video. We then assessed how strongly they believed LLMs to possess various mental capacities, such as the ability to have intentions or remember things. We found that participants who watched the companion video reported believing that LLMs more fully possessed these capacities than did participants in the other groups. In a follow-up study (N = 604), we replicated these findings and found nuanced effects of these videos on people's reliance on LLM-generated responses when seeking out factual information. Together, these studies highlight the potential of messaging about AI -- beyond technical advances in AI -- to generate broad societal impact.
Problem

Research questions and friction points this paper is trying to address.

How messaging about LLMs affects mental capacity attribution
Impact of presenting LLMs as companions versus tools
Effects of AI messaging on societal perceptions and reliance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Randomly assigned participants to watch informational videos
Assessed mental capacities attributed to LLMs after exposure
Measured effects on reliance for factual information seeking
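The between-subjects design described above can be sketched in a few lines of Python. This is an illustrative simulation only: the condition names match the paper's four groups, but the function names, seed, and rating scale are hypothetical, not taken from the study's materials.

```python
import random
import statistics

# The paper's four conditions: three framing videos plus a no-video control.
CONDITIONS = ["machine", "tool", "companion", "no_video"]

def assign_conditions(n, seed=0):
    """Randomly assign n participants to one of the four conditions."""
    rng = random.Random(seed)
    return [rng.choice(CONDITIONS) for _ in range(n)]

def mean_attribution_by_condition(assignments, ratings):
    """Average mental-capacity rating per condition (e.g., a 1-7 scale)."""
    by_cond = {c: [] for c in CONDITIONS}
    for cond, rating in zip(assignments, ratings):
        by_cond[cond].append(rating)
    return {c: statistics.mean(v) for c, v in by_cond.items() if v}
```

Comparing the per-condition means (e.g., companion vs. tool) is the kind of group contrast the study's attribution analysis rests on, though the paper's actual statistical tests are not reproduced here.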