Chain-of-Conceptual-Thought: Eliciting the Agent to Deeply Think within the Response

📅 2025-10-21
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In open-domain tasks, conventional chain-of-thought (CoT) reasoning suffers from insufficient structural guidance, limiting its effectiveness. To address this, the authors propose Chain of Conceptual Thought (CoCT), a prompting paradigm that decomposes reasoning into two sequential stages: (1) tagging core conceptual elements, such as emotion, strategy, and topic, and (2) generating a response grounded in those concepts. Chains of concepts are permitted within a single utterance, enforcing internally coherent, logically structured inference paths within dialogues. Implemented purely via prompt engineering, without model fine-tuning, CoCT enhances large language models' deep, strategic reasoning in emotional-support and everyday conversational settings. Automatic, human, and model-based evaluations across multiple benchmarks show that CoCT consistently outperforms strong baselines including Self-Refine, ECoT, ToT, SoT, and RAG, indicating that CoCT is a lightweight, general-purpose, and effective prompting framework for open-domain reasoning.

📝 Abstract
Chain-of-Thought (CoT) is widely applied to improve LLM capabilities in math, coding, and reasoning tasks. However, its performance is limited for open-domain tasks, since there are no clearly defined reasoning steps or logical transitions. To mitigate these challenges, we propose another prompt-based paradigm called Chain of Conceptual Thought (CoCT), where the LLM first tags a concept and then generates the detailed content. The chain of concepts is allowed within the utterance, encouraging the LLM's deep and strategic thinking. We experiment with this paradigm in daily and emotional-support conversations, where the concept comprises emotions, strategies, and topics. Automatic, human, and model evaluations suggest that CoCT surpasses baselines such as Self-Refine, ECoT, ToT, SoT, and RAG, suggesting a potentially effective prompt-based paradigm of LLMs for a wider scope of tasks.
Problem

Research questions and friction points this paper is trying to address.

Enhancing LLM thinking for open-domain tasks without clear reasoning steps
Improving conversational quality in daily and emotional support dialogues
Developing strategic thinking through conceptual chains within LLM responses
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces Chain of Conceptual Thought paradigm
Tags concepts before generating detailed content
Enhances LLM strategic thinking in open-domain tasks
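The tag-then-generate flow described above can be sketched as a minimal prompt template plus a parser for the tagged output. This is an illustrative assumption, not the paper's exact prompt wording or tag format; the bracket syntax, tag set, and helper names are hypothetical.

```python
import re

# Illustrative concept types from the paper's conversational setting.
CONCEPT_TAGS = ["emotion", "strategy", "topic"]

def build_coct_prompt(dialogue_history: str) -> str:
    """Ask the model to tag a concept first, then ground content in it."""
    tag_list = ", ".join(CONCEPT_TAGS)
    return (
        "You are an emotional-support assistant.\n"
        f"Before each span of your reply, emit a concept tag ({tag_list}) "
        "like [strategy: reassurance], then write content grounded in it.\n"
        "A chain of several concepts within one reply is encouraged.\n\n"
        f"Dialogue so far:\n{dialogue_history}\nAssistant:"
    )

def parse_concept_chain(response: str) -> list[tuple[str, str, str]]:
    """Split a tagged response into (concept type, value, content) triples."""
    pattern = re.compile(r"\[(\w+):\s*([^\]]+)\]\s*([^\[]*)")
    return [(kind, value.strip(), text.strip())
            for kind, value, text in pattern.findall(response)]
```

A downstream system could feed `build_coct_prompt(...)` to any chat LLM and use `parse_concept_chain` to inspect which emotions, strategies, and topics the model committed to before generating each span.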
Qingqing Gu
Geely AI Lab, Beijing, China
Dan Wang
Geely AI Lab, Beijing, China
Yue Zhao
Geely AI Lab, Beijing, China
Xiaoyu Wang
Geely AI Lab, Beijing, China; Beijing Institute of Technology, Beijing, China
Zhonglin Jiang
Geely AI Lab, Beijing, China
Yong Chen
Geely AI Lab, Beijing, China
Hongyan Li
Geely AI Lab, Beijing, China
Luo Ji
Alibaba Group
Reinforcement Learning, Automatic Control