🤖 AI Summary
This study addresses a critical gap in psychological manipulation detection by extending the task beyond text to the underexplored speech modality. The authors introduce SPEECHMENTALMANIP, the first benchmark for detecting covert manipulative language in speech, constructed by augmenting existing textual datasets with high-fidelity, speaker-consistent multi-speaker synthetic audio. The evaluation combines few-shot audio-language models with human annotation. Results show that while models achieve high specificity in identifying manipulative utterances in speech, their recall is substantially lower than in the textual domain. Human evaluators also exhibit greater uncertainty, highlighting the inherent ambiguity of manipulative cues in spoken language and fundamental differences in cross-modal perception. These findings underscore the unique challenges of speech-based manipulation detection and call for modality-aware approaches.
📝 Abstract
Mental manipulation is the strategic use of language to covertly influence or exploit others, and detecting it is an emerging task in computational social reasoning. Prior work has focused exclusively on textual conversations, overlooking how manipulative tactics manifest in speech. We present the first study of mental manipulation detection in spoken dialogues, introducing SPEECHMENTALMANIP, a synthetic multi-speaker benchmark that augments a text-based dataset with high-quality, voice-consistent text-to-speech (TTS) audio. Using few-shot large audio-language models and human annotation, we evaluate how modality affects detection accuracy and perception. Our results reveal that models exhibit high specificity but markedly lower recall on speech than on text, suggesting sensitivity to acoustic and prosodic cues absent from training. Human raters show similar uncertainty in the audio setting, underscoring the inherent ambiguity of manipulative speech. Together, these findings highlight the need for modality-aware evaluation and safety alignment in multimodal dialogue systems.