"Is This Really a Human Peer Supporter?": Misalignments Between Peer Supporters and Experts in LLM-Supported Interactions

📅 2025-06-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
In mental health peer support, peers and experts can diverge in how they identify crisis signals and recommend appropriate responses, which compromises the safety and quality of interventions. To examine this, we present an LLM-supported peer support training system integrating three core components: (1) an LLM-simulated distressed client, (2) context-sensitive real-time response suggestions, and (3) real-time emotion visualisations. Two mixed-methods studies with 12 peer supporters and 5 mental health professionals evaluated the system's effectiveness and its implications for practice. Both groups recognised its potential to enhance training and improve interaction quality, yet experts consistently flagged critical issues in peer supporter responses, such as missed distress cues and premature advice-giving. These misalignments expose limitations in current peer support training and motivate standardised, psychologically grounded training, scaffolded by carefully designed LLM-supported systems under expert oversight.
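
To make the first component concrete, here is a minimal sketch of how an LLM-simulated distressed client could be wired up. The paper does not publish its implementation; the persona prompt, model choice, and OpenAI-style chat API below are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of an LLM-simulated distressed client (component 1).
# The persona prompt and model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PERSONA_PROMPT = (
    "You are role-playing a distressed client in a peer-support "
    "training session. Stay in character, disclose feelings gradually, "
    "and occasionally embed subtle crisis signals (e.g., hopelessness) "
    "so trainees can practise detecting them. Never give real advice."
)

def simulated_client_reply(history: list[dict]) -> str:
    """Return the simulated client's next turn given the chat history."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any capable chat model would do
        messages=[{"role": "system", "content": PERSONA_PROMPT}, *history],
        temperature=0.8,  # some variability keeps the role-play natural
    )
    return response.choices[0].message.content

# Example turn: the peer supporter speaks, the simulated client answers.
history = [{"role": "user", "content": "Hi, how are you feeling today?"}]
print(simulated_client_reply(history))
```

In a full training system, each trainee message would be appended to the history, and the same history could also feed the suggestion and emotion-visualisation components.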

📝 Abstract
Mental health is a growing global concern, prompting interest in AI-driven solutions to expand access to psychosocial support. Peer support, grounded in lived experience, offers a valuable complement to professional care. However, variability in training, effectiveness, and definitions raises concerns about quality, consistency, and safety. Large Language Models (LLMs) present new opportunities to enhance peer support, particularly in real-time, text-based interactions. We present and evaluate an AI-supported system with an LLM-simulated distressed client, context-sensitive LLM-generated suggestions, and real-time emotion visualisations. Two mixed-methods studies with 12 peer supporters and 5 mental health professionals (i.e., experts) examined the system's effectiveness and implications for practice. Both groups recognised its potential to enhance training and improve interaction quality. However, a key tension emerged: while peer supporters engaged meaningfully, experts consistently flagged critical issues in peer supporter responses, such as missed distress cues and premature advice-giving. This misalignment highlights potential limitations in current peer support training, especially in emotionally charged contexts where safety and fidelity to best practices are essential. Our findings underscore the need for standardised, psychologically grounded training, especially as peer support scales globally. They also demonstrate how LLM-supported systems can scaffold this development, if designed with care and guided by expert oversight. This work contributes to emerging conversations on responsible AI integration in mental health and the evolving role of LLMs in augmenting peer-delivered care.
Problem

Research questions and friction points this paper is trying to address.

Evaluating the quality and safety of AI-enhanced peer support interactions
Identifying misalignments between peer supporters and mental health experts
Developing standardized training for LLM-supported mental health interactions
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-simulated distressed client for peer support training
Context-sensitive LLM-generated real-time interaction suggestions
Real-time emotion visualizations during support conversations (see the sketch below)
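
To illustrate the third component, below is a minimal sketch of how per-turn emotion scores for a live visualisation might be computed. The paper does not specify its emotion model; the VADER sentiment scorer and the text-bar rendering here are stand-in assumptions.

```python
# Sketch of per-message emotion scoring for a real-time visualisation
# (component 3). VADER sentiment is a placeholder; the paper does not
# specify which emotion model the system uses.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
analyzer = SentimentIntensityAnalyzer()

def valence(message: str) -> float:
    """Compound sentiment in [-1, 1]; lower means more distressed."""
    return analyzer.polarity_scores(message)["compound"]

conversation = [
    "I don't know why I even bother anymore.",
    "Work has been piling up and I can't sleep.",
    "Talking about it helps a little, I guess.",
]
# In the real system these scores would presumably drive a live chart
# beside the chat window; here a crude text bar stands in per turn.
for turn in conversation:
    v = valence(turn)
    bar = "#" * int((v + 1) * 10)
    print(f"{v:+.2f} {bar:<20} {turn}")
```

A production version would likely replace the lexicon-based scorer with an LLM or a dedicated emotion classifier and stream scores to the interface as each message arrives.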