Multi-Agent Causal Reasoning for Suicide Ideation Detection Through Online Conversations

📅 2026-02-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current approaches to detecting suicidal ideation in online conversations predominantly rely on predefined rules, which often fail to capture implicit social influences such as conformity and imitation, leading to incomplete identification. This work proposes a Multi-Agent Causal Reasoning (MACR) framework, introducing multi-agent causal reasoning to this task for the first time. Grounded in cognitive appraisal theory, MACR generates counterfactual user responses and integrates bias-aware decision-making agents to mitigate latent biases through front-door adjustment. By synergistically combining counterfactual reasoning, causal inference, and multidimensional analysis of cognition, affect, and behavior, the method significantly enhances both accuracy and robustness in detecting suicidal ideation on real-world conversational data.

📝 Abstract
Suicide remains a pressing global public health concern. While social media platforms offer opportunities for early risk detection through online conversation trees, existing approaches face two major limitations: (1) They rely on predefined rules (e.g., quotes or replies) to log conversations, which capture only a narrow spectrum of user interactions, and (2) They overlook hidden influences such as user conformity and suicide copycat behavior, which can significantly affect suicidal expression and propagation in online communities. To address these limitations, we propose a Multi-Agent Causal Reasoning (MACR) framework that collaboratively employs a Reasoning Agent to scale user interactions and a Bias-aware Decision-Making Agent to mitigate harmful biases arising from hidden influences. The Reasoning Agent integrates cognitive appraisal theory to generate counterfactual user reactions to posts, thereby scaling user interactions. It analyses these reactions through structured dimensions, i.e., cognitive, emotional, and behavioral patterns, with a dedicated sub-agent responsible for each dimension. The Bias-aware Decision-Making Agent mitigates hidden biases through a front-door adjustment strategy, leveraging the counterfactual user reactions produced by the Reasoning Agent. Through the collaboration of reasoning and bias-aware decision-making, the proposed MACR framework not only alleviates hidden biases but also enriches the contextual information of user interactions with counterfactual knowledge. Extensive experiments on real-world conversational datasets demonstrate the effectiveness and robustness of MACR in identifying suicide risk.
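The "front-door adjustment" the abstract relies on is Pearl's front-door criterion: when an unobserved confounder U affects both treatment X and outcome Y, but X influences Y only through an observed mediator M (here, roughly, counterfactual user reactions mediating between post exposure and risk signals), the interventional distribution P(y | do(x)) can still be recovered from purely observational quantities. The sketch below is a minimal toy illustration of that identity on binary variables; all distributions and variable names are invented for illustration and are not from the paper.

```python
import itertools

# Toy structural model for a front-door graph: U -> X, U -> Y, X -> M -> Y.
# U is an unobserved confounder; X affects Y only through the mediator M.
P_u = {0: 0.5, 1: 0.5}                              # P(U=u)
P_x1_given_u = {0: 0.2, 1: 0.8}                     # P(X=1 | U=u)
P_m1_given_x = {0: 0.3, 1: 0.9}                     # P(M=1 | X=x)
P_y1_given_mu = {(0, 0): 0.1, (0, 1): 0.4,          # P(Y=1 | M=m, U=u)
                 (1, 0): 0.6, (1, 1): 0.9}

def bern(p1, v):
    """P(V=v) for a Bernoulli variable with P(V=1)=p1."""
    return p1 if v == 1 else 1.0 - p1

# Observational joint P(u, x, m, y), as logged data would present it.
joint = {}
for u, x, m, y in itertools.product([0, 1], repeat=4):
    joint[(u, x, m, y)] = (P_u[u]
                           * bern(P_x1_given_u[u], x)
                           * bern(P_m1_given_x[x], m)
                           * bern(P_y1_given_mu[(m, u)], y))

def p(**fixed):
    """Marginal probability of a partial assignment over (u, x, m, y)."""
    return sum(pr for (u, x, m, y), pr in joint.items()
               if all({'u': u, 'x': x, 'm': m, 'y': y}[k] == v
                      for k, v in fixed.items()))

def front_door(x, y=1):
    """P(Y=y | do(X=x)) via the front-door formula,
    using only observational quantities over X, M, Y (U never appears):
    sum_m P(m|x) * sum_x' P(y|m,x') P(x')."""
    total = 0.0
    for m in [0, 1]:
        p_m_given_x = p(x=x, m=m) / p(x=x)
        inner = sum(p(x=xp) * p(x=xp, m=m, y=y) / p(x=xp, m=m)
                    for xp in [0, 1])
        total += p_m_given_x * inner
    return total

def truth(x, y=1):
    """Ground-truth interventional P(Y=y | do(X=x)) from the model itself."""
    return sum(P_u[u] * bern(P_m1_given_x[x], m) * bern(P_y1_given_mu[(m, u)], y)
               for u in [0, 1] for m in [0, 1])
```

Here `front_door(x)` matches `truth(x)` exactly, while the naive conditional `p(x=x, y=1) / p(x=x)` does not, because U confounds X and Y; this gap is the "harmful bias" that a front-door strategy removes without ever observing the confounder.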
Problem

Research questions and friction points this paper is trying to address.

suicide ideation detection
online conversations
hidden influences
user interactions
multi-agent causal reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-Agent Causal Reasoning
Counterfactual Reasoning
Bias Mitigation
Suicide Ideation Detection
Front-Door Adjustment
Jun Li
The Hong Kong University of Science and Technology
AI ECG, Biosignal, AI for Digital Health, Health Data Science, AI for Healthcare

Xiangmeng Wang
The Hong Kong Polytechnic University, Hong Kong, China

Haoyang Li
The Hong Kong Polytechnic University, Hong Kong, China

Yifei Yan
City University of Hong Kong, Hong Kong, China

Shijie Zhang
Shenzhen MSU-BIT University, Shenzhen, China

Hong Va Leong
The Hong Kong Polytechnic University, Hong Kong, China

Ling Feng
Tsinghua University, Beijing, China

Nancy Xiaonan Yu
City University of Hong Kong, Hong Kong, China

Qing Li
Chair Professor (Data Science), The Hong Kong Polytechnic University
database, data warehouse, multimedia retrieval, web services, e-learning