Exposure to Content Written by Large Language Models Can Reduce Stigma Around Opioid Use Disorder in Online Communities

📅 2025-04-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the stigma surrounding opioid use disorder (OUD) and medication-assisted treatment (MAT) in online communities. Using a preregistered randomized controlled trial—the first empirical test of large language model (LLM)-generated educational responses for stigma reduction—we found that participants exposed to LLM-authored content exhibited significantly more positive attitudes toward MAT and lower stigma scores on validated measures (e.g., the MAT Attitudes Scale and Patient Stigma Scale) compared to both human-written and no-intervention control groups. Methodologically, the study employs multi-turn LLM response generation, dual-timeframe assessment (cross-sectional and longitudinal), and rigorously validated stigma instruments. Its key contributions are: (1) establishing LLMs as scalable, standardized digital health education tools; and (2) proposing an AI-augmented paradigm to enhance inclusivity and accuracy in online health communication—setting a methodological benchmark for AI-enabled public health messaging.

📝 Abstract
Widespread stigma, in both offline and online spaces, acts as a barrier to harm reduction efforts in the context of opioid use disorder (OUD). This stigma is prominently directed toward clinically approved medications for addiction treatment (MAT), people with the condition, and the condition itself. Given the potential of artificial intelligence-based technologies to promote health equity and facilitate empathic conversations, this work examines whether large language models (LLMs) can help abate OUD-related stigma in online communities. To answer this, we conducted a series of preregistered randomized controlled experiments in which participants read LLM-generated, human-written, or no responses to help-seeking OUD-related content in online communities. The experiment was conducted under two setups: participants read the responses either once (N = 2,141) or repeatedly for 14 days (N = 107). We found that participants reported the least stigmatized attitudes toward MAT after consuming LLM-generated responses under both setups. This study offers insights into strategies that can foster inclusive online discourse on OUD; for example, based on our findings, LLMs can be used as an education-based intervention to promote positive attitudes and increase people's propensity toward MAT.
Problem

Research questions and friction points this paper is trying to address.

Reducing opioid use disorder stigma online
Evaluating LLMs' impact on stigma reduction
Promoting positive attitudes toward addiction treatment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Used LLM-generated responses to reduce stigma
Conducted randomized controlled experiments online
Promoted positive attitudes toward MAT
Shravika Mittal
Georgia Institute of Technology
Social Computing · Natural Language Processing · Human-AI Interaction
Darshi Shah
College of Computing, Georgia Institute of Technology, Atlanta, Georgia, USA
Shin Won Do
College of Computing, Georgia Institute of Technology, Atlanta, Georgia, USA
Mai ElSherief
Khoury College of Computer Sciences, Northeastern University, Boston, Massachusetts, USA
Tanushree Mitra
Associate Professor, Information School, University of Washington
Responsible AI · Social Computing · HCI · Auditing Online Systems · Computational Social Science
Munmun De Choudhury
Georgia Institute of Technology
Computational Social Science · Social Computing · Mental Health · Language