Does "Reasoning" with Large Language Models Improve Recognizing, Generating, and Reframing Unhelpful Thoughts?

📅 2025-03-31
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work investigates whether lightweight reasoning augmentation can enhance cognitive restructuring in cognitive behavioral therapy (CBT) using large language models (LLMs), specifically targeting identification, generation, and restructuring of maladaptive thoughts. We propose a reasoning optimization framework grounded in chain-of-thought (CoT) prompting and self-consistency, enabling low-cost deployment on off-the-shelf LLMs such as GPT-3.5. Our experiments provide the first systematic evaluation demonstrating that this approach outperforms state-of-the-art pre-trained reasoning models across all three core CBT tasks, achieving significant improvements in restructuring accuracy and clinical plausibility. Crucially, we establish that specialized architectures or extensive fine-tuning are unnecessary: interpretable, reusable prompting strategies alone suffice to robustly augment critical CBT processes. This work contributes both a novel paradigm for AI-augmented psychological intervention and empirical validation of reasoning-aware prompting as an effective, accessible mechanism for clinical NLP applications.

๐Ÿ“ Abstract
Cognitive Reframing, a core element of Cognitive Behavioral Therapy (CBT), helps individuals reinterpret negative experiences by finding positive meaning. Recent advances in Large Language Models (LLMs) have demonstrated improved performance through reasoning-based strategies. This suggests a promising direction: leveraging the reasoning capabilities of LLMs to improve CBT and mental reframing by simulating the process of critical thinking, potentially enabling more effective recognition, generation, and reframing of cognitive distortions. In this work, we investigate the role of various reasoning methods, including pre-trained reasoning LLMs and augmented reasoning strategies such as CoT and self-consistency, in enhancing LLMs' ability to perform cognitive reframing tasks. We find that augmented reasoning methods, even when applied to "outdated" LLMs like GPT-3.5, consistently outperform state-of-the-art pre-trained reasoning models on recognizing, generating, and reframing unhelpful thoughts.
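The self-consistency strategy the abstract describes can be sketched as majority voting over several independent chain-of-thought samples. The sketch below is illustrative, not taken from the paper: the label set, the prompt wording, and the simulated answer distribution are all assumptions, and `classify_distortion` stands in for a real sampled LLM call.

```python
from collections import Counter

# Hypothetical distortion labels and a simulated answer distribution
# (6/9 of samples agree on the first label); real CBT taxonomies vary.
SIMULATED_SAMPLES = (
    ["catastrophizing"] * 6 + ["overgeneralization"] * 3 + ["mind reading"]
)

def classify_distortion(thought: str, sample_id: int) -> str:
    """Stand-in for one chain-of-thought sample from an LLM.

    A real implementation would send the thought with a
    "Let's think step by step" prompt at temperature > 0 and parse the
    model's final label; here each sample_id deterministically draws from
    the simulated distribution so the voting logic is runnable.
    """
    return SIMULATED_SAMPLES[sample_id % len(SIMULATED_SAMPLES)]

def self_consistency(thought: str, n_samples: int = 9) -> str:
    """Sample several CoT answers and return the majority-vote label."""
    votes = Counter(classify_distortion(thought, i) for i in range(n_samples))
    return votes.most_common(1)[0][0]

print(self_consistency("If I fail this exam, my whole life is ruined."))
# → catastrophizing (6 of 9 simulated samples agree)
```

The design point is that self-consistency needs no fine-tuning or special architecture: it only requires sampling the same prompt multiple times and aggregating the final answers, which is why it can be layered onto an off-the-shelf model such as GPT-3.5.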
Problem

Research questions and friction points this paper is trying to address.

Enhancing CBT using LLMs for cognitive reframing
Evaluating reasoning methods to improve thought recognition
Comparing reasoning strategies for generating helpful thoughts
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leveraging LLMs for cognitive reframing tasks
Using augmented reasoning like CoT strategies
Applying reasoning to outdated models like GPT-3.5