Assess and Prompt: A Generative RL Framework for Improving Engagement in Online Mental Health Communities

📅 2025-08-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
In online mental health communities (OMHCs), numerous help-seeking posts lack critical support attributes—such as emotional state, specific distress, and help-seeking intent—resulting in low response rates and diminished user engagement. To address this, we propose MH-COPILOT, the first framework applying generative reinforcement learning to support-attribute completion. It introduces CueTaxo, a hierarchical taxonomy of support attributes, and integrates context-aware span detection, attribute intensity classification, controllable question generation, and verifier-based reward modeling to dynamically assess missing attributes and generate precise, targeted clarifying questions during inference. Experiments across four mainstream language models demonstrate significant improvements: +28.6% in support-attribute identification accuracy and +34.1% in user response rate. Human evaluation further confirms that MH-COPILOT substantially enhances post quality and community interaction efficacy.

📝 Abstract
Online Mental Health Communities (OMHCs) provide crucial peer and expert support, yet many posts remain unanswered due to missing support attributes that signal the need for help. We present a novel framework that identifies these gaps and prompts users to enrich their posts, thereby improving engagement. To support this, we introduce REDDME, a new dataset of 4,760 posts from mental health subreddits annotated for the span and intensity of three key support attributes: event (what happened?), effect (what did the user experience?), and requirement (what support do they need?). Next, we devise a hierarchical taxonomy, CueTaxo, of support attributes for controlled question generation. Further, we propose MH-COPILOT, a reinforcement learning-based system that integrates (a) contextual attribute-span identification, (b) support attribute intensity classification, (c) controlled question generation via a hierarchical taxonomy, and (d) a verifier for reward modeling. Our model dynamically assesses posts for the presence/absence of support attributes, and generates targeted prompts to elicit missing information. Empirical results across four notable language models demonstrate significant improvements in attribute elicitation and user engagement. A human evaluation further validates the model's effectiveness in real-world OMHC settings.
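The assess-and-prompt loop described in the abstract can be sketched minimally as follows. This is an illustrative approximation, not the paper's implementation: `detected_spans` stands in for the output of the contextual span-identification model, and the fixed `QUESTION_TEMPLATES` are hypothetical placeholders for MH-COPILOT's taxonomy-controlled question generation.

```python
# Minimal sketch of the assess-and-prompt loop, assuming the three
# REDDME support attributes: event, effect, requirement.
ATTRIBUTES = ("event", "effect", "requirement")

# Hypothetical fixed templates; the paper instead generates questions
# controlled by its CueTaxo taxonomy.
QUESTION_TEMPLATES = {
    "event": "Could you share what happened that led you to post?",
    "effect": "How has this been affecting you?",
    "requirement": "What kind of support are you hoping for?",
}

def assess_and_prompt(detected_spans: dict) -> list:
    """Return clarifying questions for attributes missing from a post.

    `detected_spans` maps attribute name -> list of text spans found by
    an upstream span-identification model; an empty list means the
    attribute is absent from the post.
    """
    return [
        QUESTION_TEMPLATES[attr]
        for attr in ATTRIBUTES
        if not detected_spans.get(attr)
    ]

# Example: a post that states the event but neither its effect nor the
# support being sought triggers two clarifying questions.
post_spans = {"event": ["lost my job last week"], "effect": [], "requirement": []}
print(assess_and_prompt(post_spans))
```

In the full system this gap assessment happens dynamically at inference time, with question wording conditioned on attribute intensity rather than drawn from static templates.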
Problem

Research questions and friction points this paper is trying to address.

Identifying gaps in online mental health posts
Generating targeted prompts to elicit missing information
Improving user engagement in mental health communities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement learning system for dynamic assessment
Hierarchical taxonomy for controlled question generation
Contextual attribute-span identification and classification
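The verifier-for-reward-modeling idea above can be illustrated with a toy scalar reward: a generated question earns credit only if it targets an attribute that is actually missing from the post. The paper's verifier is a learned model; the keyword matching and the `attr_keywords` lexicon below are purely hypothetical stand-ins.

```python
def verifier_reward(generated_question: str,
                    missing_attrs: set,
                    attr_keywords: dict) -> float:
    """Toy verifier-based reward for RL fine-tuning.

    Returns 1.0 if the question mentions a keyword tied to any missing
    attribute (i.e., it elicits information the post lacks), else 0.0.
    """
    q = generated_question.lower()
    for attr in missing_attrs:
        if any(kw in q for kw in attr_keywords.get(attr, [])):
            return 1.0
    return 0.0

# Hypothetical keyword lexicon per support attribute.
attr_keywords = {
    "event": ["happened", "situation"],
    "effect": ["feel", "affecting"],
    "requirement": ["support", "help", "advice"],
}

# A question probing the user's feelings is rewarded when "effect" is missing.
print(verifier_reward("How does this make you feel?", {"effect"}, attr_keywords))
```

A reward of this shape would then drive a policy-gradient update so the question generator learns to target genuinely absent attributes.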