MentalMAC: Enhancing Large Language Models for Detecting Mental Manipulation via Multi-Task Anti-Curriculum Distillation

📅 2025-05-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Detecting psychological manipulation in multi-turn dialogues remains challenging due to its high concealment and the severe scarcity of annotated data. To address these issues, this paper proposes a multi-task anti-curriculum distillation framework. It introduces, for the first time, a "hard-to-easy" anti-curriculum learning paradigm—reversing conventional easy-to-hard curriculum design—integrated with evolutionary semantic augmentation (EvoSA) and speech-act-theory-guided unsupervised sample construction, alongside multi-task supervised learning and human-in-the-loop annotation. Evaluated on ReaMent, a high-quality, real-world dialogue dataset (5,000 samples) curated by the authors, the framework significantly narrows the performance gap between teacher and student models, outperforming state-of-the-art large language models on F1-score, accuracy, and other key metrics. The core contributions are threefold: (1) pioneering the application of anti-curriculum distillation to psychological manipulation detection; (2) establishing a theory-grounded, data-efficient, and scalable lightweight detection paradigm; and (3) releasing a benchmark dataset enabling reproducible research.

📝 Abstract
Mental manipulation is a subtle yet pervasive form of psychological abuse that poses serious threats to mental health. Its covert nature and the complexity of manipulation strategies make it challenging to detect, even for state-of-the-art large language models (LLMs). This concealment also hinders the manual collection of large-scale, high-quality annotations essential for training effective models. Although recent efforts have sought to improve LLMs' performance on this task, progress remains limited due to the scarcity of real-world annotated datasets. To address these challenges, we propose MentalMAC, a multi-task anti-curriculum distillation method that enhances LLMs' ability to detect mental manipulation in multi-turn dialogue. Our approach includes: (i) EvoSA, an unsupervised data expansion method based on evolutionary operations and speech act theory; (ii) teacher-model-generated multi-task supervision; and (iii) progressive knowledge distillation from complex to simpler tasks. We then constructed the ReaMent dataset with 5,000 real-world dialogue samples, using a MentalMAC-distilled model to assist human annotation. Extensive experiments demonstrate that our method significantly narrows the gap between student and teacher models and outperforms competitive LLMs across key evaluation metrics. All code, datasets, and checkpoints will be released upon paper acceptance. Warning: This paper contains content that may be offensive to readers.
Problem

Research questions and friction points this paper is trying to address.

Detecting subtle mental manipulation in dialogues using LLMs
Overcoming scarcity of annotated datasets for manipulation detection
Enhancing LLM performance via multi-task anti-curriculum distillation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unsupervised data expansion via EvoSA method
Multi-task supervision from teacher models
Anti-curriculum distillation from complex to simple
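The anti-curriculum idea above can be sketched as a task scheduler that orders distillation phases from hardest to easiest, the reverse of conventional curriculum learning. This is a minimal illustrative sketch: the task names and difficulty scores are hypothetical, not taken from the paper.

```python
# Hypothetical sketch of anti-curriculum scheduling: distillation tasks
# are presented hard-to-easy, reversing a conventional curriculum.
def anti_curriculum_order(tasks):
    """Return tasks sorted from most to least difficult."""
    return sorted(tasks, key=lambda t: t["difficulty"], reverse=True)

# Illustrative multi-task set (names/difficulties are assumptions):
tasks = [
    {"name": "binary_detection", "difficulty": 1},        # easiest task
    {"name": "technique_classification", "difficulty": 2},
    {"name": "rationale_generation", "difficulty": 3},    # hardest task
]

schedule = anti_curriculum_order(tasks)
print([t["name"] for t in schedule])
# A distillation loop would then train the student phase by phase
# over `schedule`, starting with the hardest task.
```

Under this scheme, the student first absorbs the teacher's supervision on the most complex task and then refines on progressively simpler ones.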
Yuansheng Gao (Peking University)
Han Bao (Zhejiang University)
Tong Zhang (Zhejiang University)
Bin Li (SIAT, CAS)
Zonghui Wang (Zhejiang University)
Wenzhi Chen (Chang Gung University)