🤖 AI Summary
This work investigates whether lightweight reasoning augmentation can enhance cognitive restructuring in cognitive behavioral therapy (CBT) using large language models (LLMs), specifically targeting the identification, generation, and restructuring of maladaptive thoughts. We propose a reasoning optimization framework grounded in chain-of-thought (CoT) prompting and self-consistency, enabling low-cost deployment on off-the-shelf LLMs such as GPT-3.5. Our experiments provide the first systematic evaluation showing that this approach outperforms state-of-the-art pre-trained reasoning models across all three core CBT tasks, with significant improvements in restructuring accuracy and clinical plausibility. Crucially, we establish that specialized architectures and extensive fine-tuning are unnecessary: interpretable, reusable prompting strategies alone suffice to robustly augment critical CBT processes. This work contributes both a novel paradigm for AI-augmented psychological intervention and empirical validation of reasoning-aware prompting as an effective, accessible mechanism for clinical NLP applications.
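To make the "interpretable, reusable prompting strategies" concrete, here is a minimal sketch of what a chain-of-thought prompt for cognitive reframing might look like. The template, step wording, and distortion labels are illustrative assumptions, not the paper's exact prompts.

```python
# Hypothetical CoT prompt template for the three CBT tasks the summary names:
# identifying a distortion, gathering evidence, and generating a reframe.
COT_REFRAMING_PROMPT = """\
Situation: {situation}
Automatic thought: {thought}

Let's reason step by step:
1. Identify the cognitive distortion (e.g. catastrophizing, mind reading).
2. List evidence for and against the thought.
3. Propose a balanced, reframed thought.

Answer:"""

prompt = COT_REFRAMING_PROMPT.format(
    situation="I gave a presentation and one colleague looked bored.",
    thought="Everyone thinks my work is worthless.",
)
print(prompt)
```

Because the reasoning steps are spelled out in plain language, the same template can be reused across models (e.g. sent as a user message to GPT-3.5) without any fine-tuning.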
📝 Abstract
Cognitive Reframing, a core element of Cognitive Behavioral Therapy (CBT), helps individuals reinterpret negative experiences by finding positive meaning. Recent advances in Large Language Models (LLMs) have demonstrated improved performance through reasoning-based strategies. This suggests a promising direction: leveraging the reasoning capabilities of LLMs to improve CBT and mental reframing by simulating the process of critical thinking, potentially enabling more effective recognition, generation, and reframing of cognitive distortions. In this work, we investigate the role of various reasoning methods, including pre-trained reasoning LLMs and augmented reasoning strategies such as CoT and self-consistency, in enhancing LLMs' ability to perform cognitive reframing tasks. We find that augmented reasoning methods, even when applied to "outdated" LLMs like GPT-3.5, consistently outperform state-of-the-art pre-trained reasoning models on recognizing, generating, and reframing unhelpful thoughts.
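The self-consistency strategy mentioned above can be sketched as follows: sample several chain-of-thought answers for the same distortion-recognition prompt, then return the majority label. The `sample_cot_answer` stub below stands in for a real LLM API call (e.g. GPT-3.5 sampled at temperature > 0); the label set and fake samples are assumptions made so the sketch is self-contained and runnable.

```python
from collections import Counter

def sample_cot_answer(thought: str, seed: int) -> str:
    """Stub for one sampled chain-of-thought completion.

    A real implementation would prompt the model to reason step by step
    about the thought and then parse the final distortion label from the
    completion. Here we return canned labels to simulate stochastic samples.
    """
    fake_samples = ["catastrophizing", "catastrophizing", "overgeneralization"]
    return fake_samples[seed % len(fake_samples)]

def self_consistent_label(thought: str, n_samples: int = 5) -> str:
    """Sample n CoT answers and majority-vote the final label."""
    votes = Counter(sample_cot_answer(thought, seed) for seed in range(n_samples))
    return votes.most_common(1)[0][0]

label = self_consistent_label("I failed one exam, so I'll fail everything.")
print(label)  # majority vote over the sampled labels
```

The design choice is that disagreement between sampled reasoning chains is resolved by voting rather than by trusting any single chain, which is what makes the strategy cheap to bolt onto an off-the-shelf model.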