📝 Abstract
Access to accurate and actionable harm reduction information can directly affect the health outcomes of People Who Use Drugs (PWUD), yet existing online channels often fail to meet their diverse and dynamic needs due to limitations in adaptability, accessibility, and the pervasive impact of stigma. Large Language Models (LLMs) present a novel opportunity to enhance information provision, but their application in such a high-stakes domain is under-explored and presents socio-technical challenges. This paper investigates how LLMs can be responsibly designed to support the information needs of PWUD. Through a qualitative workshop involving diverse stakeholder groups (academics, harm reduction practitioners, and an online community moderator), we explored LLM capabilities, identified potential use cases, and delineated core design considerations. Our findings reveal that while LLMs can address some existing information barriers (e.g., by offering responsive, multilingual, and potentially less stigmatising interactions), their effectiveness is contingent upon overcoming challenges related to ethical alignment with harm reduction principles, nuanced contextual understanding, effective communication, and clearly defined operational boundaries. We articulate design pathways emphasising collaborative co-design with experts and PWUD to develop LLM systems that are helpful, safe, and responsibly governed. This work contributes empirically grounded insights and actionable design considerations for the responsible development of LLMs as supportive tools within the harm reduction ecosystem.