Large Language Models in Peer-Run Community Behavioral Health Services: Understanding Peer Specialists' and Service Users' Perspectives on Opportunities, Risks, and Mitigation Strategies

📅 2026-02-09
🤖 AI Summary
This study investigates the challenges large language models (LLMs) pose to contextuality, trust, and autonomy when introduced into peer-led community behavioral health services. In collaboration with a New Jersey-based peer support organization, the authors employed a comicboarding co-design methodology, engaging 16 peer specialists and 10 service users in workshops to collaboratively develop and evaluate an LLM-powered recommendation system prototype. The work proposes a “lived experience–in-the-loop” design philosophy, positioning LLMs not as clinical tools but as relational collaborators in high-stakes care contexts, with an emphasis on co-constructing trust. The project identifies three key tensions—scalability versus localization, trust maintenance versus relational dynamics, and efficiency gains versus peer autonomy—and outlines corresponding opportunities, risks, and mitigation strategies.

📝 Abstract
Peer-run organizations (PROs) provide critical, recovery-based behavioral health support rooted in lived experience. As large language models (LLMs) enter this domain, their scale, conversationality, and opacity introduce new challenges for situatedness, trust, and autonomy. Partnering with Collaborative Support Programs of New Jersey (CSPNJ), a statewide PRO in the Northeastern United States, we used comicboarding, a co-design method, to conduct workshops with 16 peer specialists and 10 service users exploring perceptions of integrating an LLM-based recommendation system into peer support. Findings show that depending on how LLMs are introduced, constrained, and co-used, they can reconfigure in-room dynamics by sustaining, undermining, or amplifying the relational authority that grounds peer support. We identify opportunities, risks, and mitigation strategies across three tensions: bridging scale and locality, protecting trust and relational dynamics, and preserving peer autonomy amid efficiency gains. We contribute design implications that center lived-experience-in-the-loop, reframe trust as co-constructed, and position LLMs not as clinical tools but as relational collaborators in high-stakes, community-led care.
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Peer-run organizations
Behavioral health
Trust
Autonomy
Innovation

Methods, ideas, or system contributions that make the work stand out.

large language models
peer-run organizations
co-design
relational collaboration
lived experience
Cindy Peng
School of Computer Science, Carnegie Mellon University, Pittsburgh, Pennsylvania, USA
Megan Chai
Human-Computer Interaction Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania, USA
Gao Mo
School of Computer Science, Carnegie Mellon University, Pittsburgh, Pennsylvania, USA
Naveen Raman
Machine Learning PhD, Carnegie Mellon University
Ningjing Tang
Human-Computer Interaction Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania, USA
Shannon Pagdon
School of Social Work, University of Pittsburgh, Pittsburgh, Pennsylvania, USA
Margaret Swarbrick
Graduate School of Applied and Professional Psychology, Rutgers University, Piscataway, New Jersey, USA
Nev Jones
School of Social Work, University of Pittsburgh, Pittsburgh, Pennsylvania, USA
Fei Fang
Carnegie Mellon University
Artificial Intelligence, Game Theory, Optimization
Hong Shen
Assistant Professor, Carnegie Mellon University
human-computer interaction, social computing, communications, public policy