Generative Confidants: How do People Experience Trust in Emotional Support from Generative AI?

📅 2026-01-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the limited understanding of how users develop and experience trust in generative AI within emotional support contexts. Through an analysis of user diaries, authentic chat logs, and semi-structured in-depth interviews, it investigates the mechanisms of trust formation in informal, unsupervised human–AI interactions. The research identifies three novel trust drivers: familiarity fostered by personalization, refined mental models of AI capabilities, and a sense of conversational control. It also reveals a dual effect of AI linguistic homogenization on trust perceptions—enhancing predictability while simultaneously blurring the perceived boundaries of AI’s nature. These findings offer theoretical grounding and practical guidance for the design and ethical deployment of generative AI in affective and psychological support applications.

📝 Abstract
People are increasingly turning to generative AI (e.g., ChatGPT, Gemini, Copilot) for emotional support and companionship. While trust is likely to play a central role in enabling these informal and unsupervised interactions, we still lack an understanding of how people develop and experience it in this context. Seeking to fill this gap, we recruited 24 frequent users of generative AI for emotional support and conducted a qualitative study consisting of diary entries about interactions, transcripts of chats with AI, and in-depth interviews. Our results suggest important novel drivers of trust in this context: familiarity emerging from personalisation, nuanced mental models of generative AI, and awareness of people's control over conversations. Notably, generative AI's homogeneous use of personalised, positive, and persuasive language appears to promote some of these trust-building factors. However, this also seems to discourage other trust-related behaviours, such as remembering that generative AI is a machine trained to converse in human language. We present implications for future research that are likely to become critical as the use of generative AI for emotional support increasingly overlaps with therapeutic work.
Problem

Research questions and friction points this paper is trying to address.

generative AI
trust
emotional support
human-AI interaction
mental models
Innovation

Methods, ideas, or system contributions that make the work stand out.

generative AI
trust
emotional support
personalization
mental models
Riccardo Volpato
School of Psychology and Neuroscience, University of Glasgow, 62 Hillhead St, Glasgow, G12 8QB, United Kingdom; School of Computing Science, University of Glasgow, 18 Lilybank Gardens, Glasgow, G12 8RZ, United Kingdom
Simone Stumpf
University of Glasgow
Human-Computer Interaction, AI, Responsible AI, End-user Development
Lisa DeBruine
School of Psychology and Neuroscience, University of Glasgow, 62 Hillhead St, Glasgow, G12 8QB, United Kingdom