🤖 AI Summary
Evaluations of large language models in mental health conversations currently lack structured assessment of core therapeutic principles, leaving clinical appropriateness unmeasured. To address this gap, this work introduces FAITH-M, the first expert-annotated benchmark grounded in established therapeutic principles, and CARE, a multi-stage evaluation framework that enables precise assessment of AI therapist responses through fine-grained ordinal scoring, context-aware analysis, contrastive example retrieval, and chain-of-thought knowledge distillation. Using Qwen3 as the backbone model, CARE achieves an F1 score of 63.34, a 64.26% relative improvement over the Qwen3 baseline, and demonstrates strong robustness across diverse datasets and expert evaluations.
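For concreteness, the sketch below shows one way the per-principle ordinal scoring and comparison against expert labels could be implemented. This is an illustrative Python sketch, not the paper's code: the 0–3 scale, the field names, and the macro-F1 aggregation over principles are all assumptions, since the summary specifies only "fine-grained ordinal scoring" and an F1 metric.

```python
# Illustrative sketch only: scores one therapist utterance along the six
# therapeutic principles named in the abstract, on an assumed 0-3 ordinal
# scale, and compares predicted vs. expert ratings with per-principle F1.
from dataclasses import dataclass
from sklearn.metrics import f1_score

PRINCIPLES = [
    "non_judgmental_acceptance",
    "warmth",
    "respect_for_autonomy",
    "active_listening",
    "reflective_understanding",
    "situational_appropriateness",
]

@dataclass
class UtteranceRating:
    # Assumed ordinal score (0-3) per principle for one therapist utterance.
    scores: dict[str, int]

def mean_macro_f1(expert: list[UtteranceRating],
                  predicted: list[UtteranceRating]) -> float:
    """Macro-F1 per principle (each ordinal level treated as a class),
    averaged over the six principles. One plausible aggregation; the
    paper's exact F1 variant is not stated in this summary."""
    per_principle = []
    for p in PRINCIPLES:
        y_true = [r.scores[p] for r in expert]
        y_pred = [r.scores[p] for r in predicted]
        per_principle.append(f1_score(y_true, y_pred, average="macro"))
    return sum(per_principle) / len(per_principle)
```

Treating each ordinal level as a class is a common convention for reporting F1 on ordinal ratings, though ordinal-aware alternatives (e.g., quadratic-weighted kappa) are also used in annotation studies.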
📝 Abstract
The increasing use of large language models in mental health applications calls for principled evaluation frameworks that assess alignment with psychotherapeutic best practices beyond surface-level fluency. While recent systems exhibit conversational competence, they lack structured mechanisms to evaluate adherence to core therapeutic principles. In this paper, we study the problem of evaluating AI-generated therapist-like responses for clinically grounded appropriateness and effectiveness. We assess each therapist utterance on a fine-grained ordinal scale along six therapeutic principles: non-judgmental acceptance, warmth, respect for autonomy, active listening, reflective understanding, and situational appropriateness. We introduce FAITH-M, a benchmark annotated with expert-assigned ordinal ratings, and propose CARE, a multi-stage evaluation framework that integrates intra-dialogue context, contrastive exemplar retrieval, and knowledge-distilled chain-of-thought reasoning. Experiments show that CARE achieves an F1 score of 63.34 versus 38.56 for the strong Qwen3 baseline, a 64.26% relative improvement; since Qwen3 also serves as CARE's backbone, the gains arise from structured reasoning and contextual modeling rather than backbone capacity alone. Expert assessment and external dataset evaluations further demonstrate robustness under domain shift, while highlighting challenges in modeling implicit clinical nuance. Overall, CARE provides a clinically grounded framework for evaluating therapeutic fidelity in AI mental health systems.
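As a sanity check on the headline numbers, the 64.26% figure is consistent with a relative (not absolute) improvement over the baseline F1:

$$\frac{63.34 - 38.56}{38.56} \approx 0.6426 = 64.26\%.$$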