🤖 AI Summary
In open-domain tasks, conventional chain-of-thought (CoT) reasoning suffers from insufficient structural guidance, limiting its effectiveness. To address this, we propose Chain of Conceptual Thought (CoCT), a novel prompting paradigm that decomposes reasoning into two sequential stages: (1) identifying core conceptual elements, such as emotion, strategy, and topic, and (2) generating a response grounded in these concepts. This enforces internally coherent, logically structured inference paths within dialogues. Implemented purely via prompt engineering, without any model fine-tuning, CoCT significantly enhances large language models' deep, strategic reasoning in emotional-support and everyday conversational settings. We evaluate CoCT with automatic metrics, human judgments, and model-based assessments across multiple benchmarks, where it consistently outperforms strong baselines including Self-Refine, ECoT, Tree-of-Thought (ToT), SoT, and RAG. These results demonstrate that CoCT is a lightweight, general-purpose, and highly effective prompting framework for open-domain reasoning.
📝 Abstract
Chain-of-Thought (CoT) is widely applied to improve LLM capabilities in math, coding, and reasoning tasks. However, its performance is limited on open-domain tasks, where there are no clearly defined reasoning steps or logical transitions. To mitigate these challenges, we propose another prompt-based paradigm called Chain of Conceptual Thought (CoCT), in which the LLM first tags a concept and then generates the detailed content. Chains of concepts are allowed within a single utterance, encouraging the LLM's deep and strategic thinking. We experiment with this paradigm in daily and emotional-support conversations, where the concepts comprise emotions, strategies, and topics. Automatic, human, and model evaluations suggest that CoCT surpasses baselines such as Self-Refine, ECoT, ToT, SoT, and RAG, pointing to an effective prompt-based paradigm of LLMs for a wider scope of tasks.
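The tag-then-generate mechanism described in the abstract can be illustrated with a minimal sketch. This is not the paper's exact prompt or tag format: the bracketed `[Concept: value]` notation, the instruction wording, and the helper names `build_coct_prompt` / `parse_coct_output` are all illustrative assumptions; only the two-stage idea (emit concept tags, then generate content grounded in them) comes from the source.

```python
import re

# Concept categories used in the paper's experiments (emotions, strategies, topics).
CONCEPTS = ("Emotion", "Strategy", "Topic")

def build_coct_prompt(user_utterance: str) -> str:
    """Wrap a user turn in a CoCT-style instruction: the model is asked to
    tag concepts first, then write the response grounded in those tags.
    The exact wording here is a hypothetical example, not the paper's prompt."""
    tag_list = ", ".join(CONCEPTS)
    return (
        f"Before replying, emit one bracketed tag per concept ({tag_list}), "
        "e.g. [Emotion: empathy], then write a reply grounded in those concepts.\n"
        f"User: {user_utterance}\nAssistant:"
    )

def parse_coct_output(text: str):
    """Split a model reply into its concept tags and the free-text content."""
    tags = dict(re.findall(r"\[(\w+):\s*([^\]]+)\]", text))
    content = re.sub(r"\[\w+:\s*[^\]]+\]\s*", "", text).strip()
    return tags, content

# Simulated model output (no real LLM call is made here).
reply = "[Emotion: empathy] [Strategy: reflection] I hear how hard this week has been."
tags, content = parse_coct_output(reply)
print(tags["Strategy"])  # reflection
```

Because the concept tags are recovered explicitly, they can be logged or evaluated separately from the surface response, which is what makes the chain of concepts inspectable within each utterance.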