🤖 AI Summary
This study addresses the stigma surrounding opioid use disorder (OUD) and medications for addiction treatment (MAT) in online communities. In a preregistered randomized controlled trial, the first empirical test of large language model (LLM)-generated educational responses for stigma reduction, participants exposed to LLM-authored content reported significantly more positive attitudes toward MAT and lower stigma scores on validated measures (e.g., the MAT Attitudes Scale and Patient Stigma Scale) than participants in both human-written and no-intervention control groups. Methodologically, the study combines LLM response generation, dual-timeframe assessment (a single exposure and 14 days of repeated exposure), and rigorously validated stigma instruments. Its key contributions are: (1) establishing LLMs as scalable, standardized digital health education tools; and (2) proposing an AI-augmented paradigm for more inclusive and accurate online health communication, setting a methodological benchmark for AI-enabled public health messaging.
📝 Abstract
Widespread stigma, in both offline and online spaces, acts as a barrier to harm reduction efforts in the context of opioid use disorder (OUD). This stigma is prominently directed toward clinically approved medications for addiction treatment (MAT), people with the condition, and the condition itself. Given the potential of artificial intelligence-based technologies to promote health equity and facilitate empathic conversations, this work examines whether large language models (LLMs) can help abate OUD-related stigma in online communities. To answer this, we conducted a series of pre-registered randomized controlled experiments in which participants read LLM-generated, human-written, or no responses to help-seeking OUD-related content in online communities. The experiment was conducted under two setups: participants read the responses either once (N = 2,141) or repeatedly for 14 days (N = 107). We found that participants reported the least stigmatized attitudes toward MAT after consuming LLM-generated responses under both setups. This study offers insights into strategies that can foster inclusive online discourse on OUD; for example, based on our findings, LLMs can be used as an education-based intervention to promote positive attitudes and increase people's propensity toward MAT.