🤖 AI Summary
This work addresses the challenge of automatically identifying and interpretably modeling psychotherapeutic orientations. We propose a dialogue-based self-play framework built on large language models (LLMs). Methodologically, we design a dual-role, therapy-aware dialogue mechanism that combines prompt engineering with multidimensional evaluation metrics (theoretical consistency, technical behavior coverage, and clinical plausibility) to enable unsupervised deconstruction and alignment of therapeutic approaches. As a key innovation, we bring self-play to psychotherapy methodology research, enabling for the first time the automated discovery and validation of core intervention patterns across distinct therapeutic orientations. Experiments successfully reconstruct and identify hallmark dialogue patterns for seven major evidence-based therapies, including CBT and ACT, achieving 86.4% accuracy in modality identification under blinded expert evaluation. Our analysis further uncovers shared intervention logic across otherwise divergent therapeutic schools.
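The dual-role self-play mechanism can be sketched roughly as follows. This is a minimal illustration, not the paper's actual implementation: the function names, role prompts, and the pluggable `generate` callable (a stand-in for any LLM completion function) are all assumptions made for clarity.

```python
# Hypothetical sketch of a dual-role, therapy-aware self-play loop.
# `generate(prompt, transcript)` is a stand-in for any LLM call;
# the prompts below are illustrative, not the paper's originals.

def self_play_dialogue(generate, modality="CBT", turns=3):
    """Alternate therapist/client turns, each turn conditioned on a
    role prompt (therapy-aware for the therapist) and the running
    transcript; returns the transcript as (role, message) pairs."""
    therapist_prompt = (
        f"You are a therapist working strictly within {modality}. "
        "Reply with one intervention consistent with that modality."
    )
    client_prompt = (
        "You are a client describing a personal concern. "
        "Respond naturally to the therapist's last message."
    )
    transcript = []
    for _ in range(turns):
        # Therapist speaks first on each turn, then the client replies.
        t_msg = generate(therapist_prompt, transcript)
        transcript.append(("therapist", t_msg))
        c_msg = generate(client_prompt, transcript)
        transcript.append(("client", c_msg))
    return transcript
```

The resulting transcripts would then be scored along the three evaluation axes named above (theoretical consistency, technical behavior coverage, clinical plausibility), e.g. by a separate judge model or expert raters.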
📝 Abstract
This paper investigates conversational self-play with LLMs as a scalable method for analyzing psychotherapy approaches, evaluating how well AI-generated therapeutic dialogues align with established modalities.