FinCoT: Grounding Chain-of-Thought in Expert Financial Reasoning

📅 2025-06-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional chain-of-thought (CoT) prompting in Financial Natural Language Processing (FinNLP) suffers from insufficient domain-specific logical constraints, weak interpretability, and misalignment with expert reasoning patterns. To address this, the authors propose FinCoT, a structured CoT prompting framework grounded in CFA-level financial expert reasoning. FinCoT models domain experts' decision-making pathways as an explicit, multi-stage, verifiable reasoning process, departing from heuristic or unstructured CoT designs. On ten CFA-style financial tasks, FinCoT improves accuracy from 63.2% to 80.5% (+17.3 percentage points) on one evaluated model, and improves Qwen-2.5-7B-Instruct from 69.7% to 74.2%, while reducing generated tokens roughly eight-fold compared to structured CoT prompting. Its reasoning traces also align more closely with real-world financial practice, are easier to trace, and better reflect expert judgment.

📝 Abstract
This paper presents FinCoT, a structured chain-of-thought (CoT) prompting approach that incorporates insights from domain-specific expert financial reasoning to guide the reasoning traces of large language models. We identify three main prompting styles in FinNLP: (1) standard prompting, i.e., zero-shot prompting; (2) unstructured CoT, i.e., CoT prompting without an explicit reasoning structure, such as the use of tags; and (3) structured CoT prompting, i.e., CoT prompting with explicit instructions or examples that define structured reasoning steps. Prior FinNLP work has focused primarily on prompt engineering with either standard or unstructured CoT prompting, while structured CoT prompting has received limited attention. Moreover, the reasoning structures used in structured CoT prompting are often based on heuristics from non-domain experts. In this study, we evaluate the three main prompting styles and FinCoT on CFA-style questions spanning ten financial domains. We observe that FinCoT improves performance from 63.2% to 80.5%, and improves Qwen-2.5-7B-Instruct from 69.7% to 74.2%, while reducing generated tokens eight-fold compared to structured CoT prompting. Our findings show that domain-aligned structured prompts not only improve performance and reduce inference costs but also yield more interpretable and expert-aligned reasoning traces.
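The three prompting styles contrasted in the abstract can be sketched as simple prompt templates. This is a minimal illustration only: the question, the tag names, and the reasoning stages below are hypothetical examples, not the paper's actual FinCoT prompts, which derive their structure from CFA-level expert reasoning.

```python
# Illustrative sketch of the three prompting styles described in the abstract.
# All templates, tag names, and stage names here are hypothetical.

QUESTION = (
    "A bond with a 5% annual coupon trades at par. "
    "If market yields fall, what happens to its price?"
)

def standard_prompt(question: str) -> str:
    """(1) Standard prompting: zero-shot, answer directly."""
    return f"{question}\nAnswer:"

def unstructured_cot_prompt(question: str) -> str:
    """(2) Unstructured CoT: elicit reasoning, but impose no structure."""
    return f"{question}\nLet's think step by step."

def structured_cot_prompt(question: str) -> str:
    """(3) Structured CoT: explicit tags define the reasoning steps.

    The stages below are made up for illustration; FinCoT instead
    derives its stages from expert financial decision-making pathways.
    """
    return (
        f"{question}\n"
        "Respond using exactly these tags:\n"
        "<identify_concept> the financial concept being tested </identify_concept>\n"
        "<apply_principle> apply the relevant pricing principle </apply_principle>\n"
        "<answer> final answer </answer>"
    )

if __name__ == "__main__":
    for build in (standard_prompt, unstructured_cot_prompt, structured_cot_prompt):
        print(build(QUESTION), end="\n---\n")
```

The paper's central comparison is between styles (2) and (3): constraining the trace with a domain-aligned structure, rather than free-form "think step by step" text, is what yields both the accuracy gains and the large reduction in generated tokens.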
Problem

Research questions and friction points this paper is trying to address.

Enhancing financial reasoning in LLMs with expert-guided structured CoT
Evaluating prompting styles for financial NLP tasks
Reducing inference costs while improving model performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Structured CoT prompting with expert financial insights
Domain-aligned reasoning steps for improved performance
Reduced inference costs and enhanced interpretability