You Never Know a Person, You Only Know Their Defenses: Detecting Levels of Psychological Defense Mechanisms in Supportive Conversations

📅 2025-12-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the challenge of reliably quantifying psychological defense mechanisms (DMs) in supportive clinical dialogues. We introduce PsyDefConv, the first fine-grained, dialogue-level annotated corpus (200 dialogues, 4,709 turns), and DMRS Co-Pilot, a four-stage AI-assisted annotation system that integrates zero-shot and fine-tuned LLM classification, multi-stage prompt engineering, and human-in-the-loop collaboration to ensure interpretability and clinical validity. Inter-annotator agreement reached Cohen's κ = 0.639, and expert evaluations scored the system at ≥4.4/7 on clinical credibility. The Co-Pilot improved annotation efficiency by 22.4%. Current state-of-the-art models achieve only ~30% macro F1, underscoring the task's difficulty. To our knowledge, this work is the first to enable structured, reproducible, and clinically deployable measurement of DMs in naturalistic clinical conversation.

📝 Abstract
Psychological defenses are strategies, often automatic, that people use to manage distress. Rigid use or overuse of defenses is negatively linked to mental health and shapes what speakers disclose and how they accept or resist help. However, defenses are complex and difficult to measure reliably, particularly in clinical dialogues. We introduce PsyDefConv, a dialogue corpus with help-seeker utterances labeled for defense level, and DMRS Co-Pilot, a four-stage pipeline that provides evidence-based pre-annotations. The corpus contains 200 dialogues and 4,709 utterances, including 2,336 help-seeker turns, with an inter-annotator agreement of Cohen's kappa 0.639. In a counterbalanced study, the co-pilot reduced average annotation time by 22.4%. In expert review, it averaged 4.62 for evidence, 4.44 for clinical plausibility, and 4.40 for insight on a seven-point scale. Benchmarks with strong language models in zero-shot and fine-tuning settings show clear headroom: the best macro F1 score is around 30%, with a tendency to overpredict mature defenses. Corpus analyses confirm that mature defenses are most common and reveal emotion-specific deviations. We will release the corpus, annotations, code, and prompts to support research on defensive functioning in language.
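The reported agreement figure uses Cohen's kappa, which corrects raw agreement for chance. As a quick reference, a minimal sketch of the computation; the defense-level labels and annotations below are illustrative, not drawn from the corpus:

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa for two annotators labeling the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is the agreement expected by chance from each annotator's
    marginal label distribution.
    """
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    ca, cb = Counter(a), Counter(b)
    p_e = sum(ca[l] * cb[l] for l in set(ca) | set(cb)) / (n * n)  # chance agreement
    return (p_o - p_e) / (1 - p_e)

# toy example with three defense-level labels (illustrative only)
ann1 = ["mature", "neurotic", "mature", "immature", "mature"]
ann2 = ["mature", "mature", "mature", "immature", "neurotic"]
print(round(cohens_kappa(ann1, ann2), 3))  # -> 0.286
```

Here p_o = 3/5 and p_e = 11/25, giving kappa = 0.16/0.56 ≈ 0.286; the paper's κ = 0.639 would correspond to substantially stronger chance-corrected agreement.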
Problem

Research questions and friction points this paper is trying to address.

Reliably measuring psychological defense mechanisms in clinical dialogues
Reducing annotation time for defense level labeling in conversations
Improving automated detection of defense mechanisms using language models
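Detection quality is reported as macro F1 (best models around 30%), which averages per-label F1 so that rare defense levels count as much as common ones. A minimal sketch of the metric; the label names and toy predictions below are illustrative, not from the paper:

```python
from collections import defaultdict

def macro_f1(gold, pred):
    """Macro-averaged F1 over the labels present in the gold standard."""
    tp, fp, fn = defaultdict(int), defaultdict(int), defaultdict(int)
    for g, p in zip(gold, pred):
        if g == p:
            tp[g] += 1
        else:
            fp[p] += 1
            fn[g] += 1
    f1s = []
    for label in set(gold):
        prec = tp[label] / (tp[label] + fp[label]) if tp[label] + fp[label] else 0.0
        rec = tp[label] / (tp[label] + fn[label]) if tp[label] + fn[label] else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

# toy case mirroring the reported failure mode: everything predicted "mature"
gold = ["mature", "mature", "neurotic", "immature"]
pred = ["mature", "mature", "mature", "mature"]
print(round(macro_f1(gold, pred), 3))  # -> 0.222
```

The toy case shows why overpredicting the majority "mature" class is punished: accuracy is 50%, but the zero F1 on the two minority labels drags macro F1 down to about 0.22.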
Innovation

Methods, ideas, or system contributions that make the work stand out.

Developed PsyDefConv corpus with defense-level labeled dialogues
Created DMRS Co-Pilot pipeline for evidence-based pre-annotations
Achieved 22.4% annotation time reduction through automated assistance