🤖 AI Summary
This study investigates how metaphorical framings of large language models (LLMs) in public discourse shape attributions of mental capacities and subsequent user behavior. Method: Two randomized controlled experiments used short video interventions presenting LLMs as "machines," "tools," or "companions," alongside a no-video control, followed by standardized mental-attribution scales and behavioral measures of reliance on LLM-generated information. Contribution/Results: The "companion" framing significantly increased attributions of intentionality, memory, and other mental capacities (p < 0.001); a replication study confirmed these effects and revealed nuanced effects on participants' reliance on model outputs when seeking factual information. The work provides experimental evidence that non-technical linguistic framing causally shapes social cognition of AI. The findings underscore the regulatory role of metaphor in public understanding of AI and human–AI interaction, offering theoretical grounding and actionable pathways for AI ethics governance and science communication practice.
📝 Abstract
How does messaging about large language models (LLMs) in public discourse influence the way people think about and interact with these models? To answer this question, we randomly assigned participants (N = 470) to watch a short informational video presenting LLMs as machines, tools, or companions -- or to watch no video. We then assessed how strongly they believed LLMs to possess various mental capacities, such as the ability to have intentions or remember things. We found that participants who watched the companion video attributed these capacities to LLMs more strongly than did participants in the other groups. In a follow-up study (N = 604), we replicated these findings and found nuanced effects of these videos on people's reliance on LLM-generated responses when seeking out factual information. Together, these studies highlight the power of messaging about AI -- quite apart from technical advances in AI -- to generate broad societal impact.