Detecting Mental Manipulation in Speech via Synthetic Multi-Speaker Dialogue

📅 2026-01-13
📈 Citations: 1
Influential: 0
🤖 AI Summary
This study addresses a critical gap in psychological manipulation detection by extending the task beyond text to the underexplored speech modality. The authors introduce SPEECHMENTALMANIP, the first benchmark for detecting covert manipulative language in speech, constructed by augmenting existing textual datasets with high-fidelity, speaker-consistent multi-speaker synthetic audio. A systematic evaluation combines few-shot audio-language models with human annotation. Experimental results reveal that while models achieve high specificity in identifying manipulative utterances in speech, their recall is substantially lower than in the textual domain. Human evaluators also exhibit greater uncertainty, highlighting the inherent ambiguity of manipulative cues in spoken language and fundamental differences in cross-modal perception. These findings underscore the unique challenges posed by speech-based manipulation detection and call for modality-aware approaches.

📝 Abstract
Mental manipulation, the strategic use of language to covertly influence or exploit others, is a newly emerging task in computational social reasoning. Prior work has focused exclusively on textual conversations, overlooking how manipulative tactics manifest in speech. We present the first study of mental manipulation detection in spoken dialogues, introducing SPEECHMENTALMANIP, a synthetic multi-speaker benchmark that augments a text-based dataset with high-quality, voice-consistent Text-to-Speech-rendered audio. Using few-shot large audio-language models and human annotation, we evaluate how modality affects detection accuracy and perception. Our results reveal that models exhibit high specificity but markedly lower recall on speech compared to text, suggesting sensitivity to acoustic or prosodic cues missing from training. Human raters show similar uncertainty in the audio setting, underscoring the inherent ambiguity of manipulative speech. Together, these findings highlight the need for modality-aware evaluation and safety alignment in multimodal dialogue systems.
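The abstract's central finding is stated in terms of specificity and recall on a binary manipulation-detection task. As a minimal sketch of what those metrics mean here, the snippet below computes both from illustrative labels (1 = manipulative, 0 = benign); the label values are hypothetical, not the paper's data, and merely reproduce the reported pattern of high specificity with low recall.

```python
def confusion_counts(y_true, y_pred):
    """Count the four confusion-matrix cells for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn

def specificity(y_true, y_pred):
    """True-negative rate: fraction of benign utterances left unflagged."""
    _, tn, fp, _ = confusion_counts(y_true, y_pred)
    return tn / (tn + fp) if (tn + fp) else 0.0

def recall(y_true, y_pred):
    """True-positive rate: fraction of manipulative utterances caught."""
    tp, _, _, fn = confusion_counts(y_true, y_pred)
    return tp / (tp + fn) if (tp + fn) else 0.0

# Hypothetical speech-modality predictions showing the reported pattern:
# no false alarms on benign speech, but most manipulation is missed.
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 0, 0, 0, 0, 0, 0, 0]
print(specificity(y_true, y_pred))  # 1.0  (high specificity)
print(recall(y_true, y_pred))       # 0.25 (low recall)
```

This is why "high specificity but low recall" describes a conservative detector: it rarely mislabels benign speech, yet lets most manipulative utterances through.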
Problem

Research questions and friction points this paper is trying to address.

mental manipulation
speech
spoken dialogue
computational social reasoning
multimodal dialogue systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

mental manipulation detection
spoken dialogue
synthetic multi-speaker dataset
audio-language models
modality-aware evaluation