🤖 AI Summary
Current approaches to detecting suicidal ideation in online conversations predominantly rely on predefined rules, which often fail to capture implicit social influences such as conformity and imitation, leading to incomplete identification. This work proposes a Multi-Agent Causal Reasoning (MACR) framework, introducing multi-agent causal reasoning to this task for the first time. Grounded in cognitive appraisal theory, MACR generates counterfactual user responses and integrates a bias-aware decision-making agent to mitigate latent biases through front-door adjustment. By combining counterfactual reasoning, causal inference, and multidimensional analysis of cognition, affect, and behavior, the method significantly enhances both accuracy and robustness in detecting suicidal ideation on real-world conversational data.
📝 Abstract
Suicide remains a pressing global public health concern. While social media platforms offer opportunities for early risk detection through online conversation trees, existing approaches face two major limitations: (1) they rely on predefined rules (e.g., quotes or replies) to log conversations, capturing only a narrow spectrum of user interactions, and (2) they overlook hidden influences such as user conformity and suicide copycat behavior, which can significantly affect suicidal expression and its propagation in online communities. To address these limitations, we propose a Multi-Agent Causal Reasoning (MACR) framework that collaboratively employs a Reasoning Agent to scale user interactions and a Bias-aware Decision-Making Agent to mitigate harmful biases arising from hidden influences. The Reasoning Agent integrates cognitive appraisal theory to generate counterfactual user reactions to posts, thereby scaling user interactions. It analyzes these reactions along structured dimensions, i.e., cognitive, emotional, and behavioral patterns, with a dedicated sub-agent responsible for each dimension. The Bias-aware Decision-Making Agent mitigates hidden biases through a front-door adjustment strategy, leveraging the counterfactual user reactions produced by the Reasoning Agent. Through the collaboration of reasoning and bias-aware decision making, the proposed MACR framework not only alleviates hidden biases but also enriches the contextual information of user interactions with counterfactual knowledge. Extensive experiments on real-world conversational datasets demonstrate the effectiveness and robustness of MACR in identifying suicide risk.
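For context, the front-door adjustment mentioned in the abstract is a standard identification strategy from causal inference (Pearl's front-door criterion). In its general form, with treatment $X$, mediator $M$, and outcome $Y$, it recovers the interventional distribution even in the presence of an unobserved confounder of $X$ and $Y$, provided $M$ fully mediates the effect of $X$ on $Y$ and is itself unconfounded. How MACR maps its components onto these variables is an assumption here (plausibly: the observed conversation as $X$, the counterfactual user reactions as the mediator $M$, and the risk label as $Y$); the general formula is:

```latex
P\big(Y \mid \mathrm{do}(X = x)\big)
  \;=\; \sum_{m} P(m \mid x) \sum_{x'} P\big(Y \mid x', m\big)\, P(x')
```

Intuitively, the outer sum propagates the effect of $X$ through the mediator, while the inner sum averages over $X$'s marginal distribution, severing the back-door path through the hidden confounder (here, influences such as conformity or copycat behavior).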