🤖 AI Summary
This study investigates the efficacy of large language models (LLMs) in zero-shot financial sentiment analysis, specifically examining whether reasoning mechanisms—such as chain-of-thought (CoT) prompting or built-in reasoning architectures—enhance alignment with human expert annotations. Method: Using the Financial PhraseBank dataset, we benchmark closed-source LLMs (GPT-4o, GPT-4.1, o3-mini) against fine-tuned smaller models (FinBERT-Prosus, FinBERT-Tone) under System 1 (intuitive) and System 2 (deliberative) prompting paradigms. Contribution/Results: We report the first empirical evidence that explicit reasoning degrades both accuracy and consistency with human annotations—a phenomenon we term “over-reasoning.” GPT-4o achieves optimal performance without CoT prompting. Linguistic complexity and annotation disagreement emerge as key moderating variables. Our findings demonstrate that intuitive, rapid responses align better with human judgment in financial contexts, challenging prevailing assumptions about reasoning’s universal utility and offering methodological guidance for deploying LLMs in domain-specific sentiment analysis.
📝 Abstract
We investigate the effectiveness of large language models (LLMs), including reasoning-based and non-reasoning models, in performing zero-shot financial sentiment analysis. Using the Financial PhraseBank dataset annotated by domain experts, we evaluate how various LLMs and prompting strategies align with human-labeled sentiment in a financial context. We compare three proprietary LLMs (GPT-4o, GPT-4.1, o3-mini) under different prompting paradigms that simulate System 1 (fast and intuitive) or System 2 (slow and deliberate) thinking, and benchmark them against two smaller models (FinBERT-Prosus, FinBERT-Tone) fine-tuned for financial sentiment analysis. Our findings suggest that reasoning, whether elicited through prompting or built into the model's design, does not improve performance on this task. Surprisingly, the most accurate and human-aligned combination of model and method was GPT-4o without any Chain-of-Thought (CoT) prompting. We further explore how performance is affected by linguistic complexity and annotation agreement levels, finding that reasoning may introduce overthinking, leading to suboptimal predictions. This suggests that for financial sentiment classification, fast, intuitive "System 1"-like thinking aligns more closely with human judgment than "System 2"-style slower, deliberative reasoning simulated by reasoning models or CoT prompting. Our results challenge the default assumption that more reasoning always leads to better LLM decisions, particularly in high-stakes financial applications.
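To make the two prompting paradigms concrete, here is a minimal sketch of what System 1 vs. System 2 prompts for zero-shot sentiment classification might look like, along with a simple label parser. The exact prompt wording, the `parse_label` helper, and its parsing rules are illustrative assumptions, not the paper's actual templates.

```python
# Hypothetical prompt templates illustrating the two paradigms studied
# (the paper's exact wording is not reproduced here).

# System 1: ask for an immediate, one-word judgment.
SYSTEM1_PROMPT = (
    "Classify the sentiment of the following financial sentence as "
    "positive, negative, or neutral. Answer with one word only.\n\n"
    "Sentence: {sentence}"
)

# System 2: ask for explicit step-by-step reasoning before the label (CoT).
SYSTEM2_PROMPT = (
    "Classify the sentiment of the following financial sentence as "
    "positive, negative, or neutral. Think step by step about the "
    "financial implications, then give your final answer on the last "
    "line in the form 'Answer: <label>'.\n\n"
    "Sentence: {sentence}"
)

def parse_label(response: str) -> str:
    """Extract the last sentiment label mentioned in a model response.

    Taking the last occurrence handles CoT outputs, where labels may be
    discussed mid-reasoning before the final answer. Defaults to
    'neutral' if no label is found (an illustrative fallback choice).
    """
    labels = {"positive", "negative", "neutral"}
    found = [w.strip(".,:'\"") for w in response.lower().split()
             if w.strip(".,:'\"") in labels]
    return found[-1] if found else "neutral"
```

Keeping the answer format machine-parseable (one word, or a fixed `Answer:` line) is what lets both paradigms be scored against the same human-annotated gold labels.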