🤖 AI Summary
Implicit Sentiment Analysis (ISA) requires inferring unstated sentiment from contextual cues, yet existing large language model (LLM)-based Chain-of-Thought (CoT) methods rely on majority voting, ignoring the causal validity of reasoning paths and thus remaining vulnerable to spurious correlations and internal biases. To address this, we propose CAPITAL, the first CoT framework incorporating Front-Door Adjustment to explicitly model the causal pathway "input → reasoning chain → output." CAPITAL integrates encoder-based clustering, the Normalized Weighted Geometric Mean (NWGM) approximation, and contrastive learning to achieve causal-aware prompt optimization. Evaluated on multiple ISA benchmarks, CAPITAL significantly outperforms strong baselines, demonstrating superior robustness against adversarial perturbations and enhanced out-of-distribution generalization.
📄 Abstract
Implicit Sentiment Analysis (ISA) aims to infer sentiment that is implied rather than explicitly stated, requiring models to perform deeper reasoning over subtle contextual cues. While recent prompting-based methods using Large Language Models (LLMs) have shown promise in ISA, they often rely on majority voting over chain-of-thought (CoT) reasoning paths without evaluating their causal validity, making them susceptible to internal biases and spurious correlations. To address this challenge, we propose CAPITAL, a causal prompting framework that incorporates front-door adjustment into CoT reasoning. CAPITAL decomposes the overall causal effect into two components: the influence of the input prompt on the reasoning chains, and the impact of those chains on the final output. These components are estimated using encoder-based clustering and the NWGM approximation, with a contrastive learning objective used to better align the encoder's representation with the LLM's reasoning space. Experiments on benchmark ISA datasets with three LLMs demonstrate that CAPITAL consistently outperforms strong prompting baselines in both accuracy and robustness, particularly under adversarial conditions. This work offers a principled approach to integrating causal inference into LLM prompting and highlights its benefits for bias-aware sentiment reasoning. The source code and case study are available at: https://github.com/whZ62/CAPITAL.
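The decomposition described above follows the standard front-door adjustment formula, with reasoning-chain clusters acting as the mediator between the input prompt and the final sentiment label. As a minimal sketch (not the paper's implementation), the toy probability tables below are hypothetical: `p_m_given_x` stands in for the clustered chain distribution estimated from sampled CoT paths, and `p_y_given_xm` for the answer distribution per (prompt, chain-cluster) pair; in CAPITAL these would come from encoder-based clustering and the NWGM approximation rather than explicit enumeration.

```python
import numpy as np

# Toy front-door adjustment over reasoning-chain clusters (hypothetical numbers).
# X: prompt variant, M: reasoning-chain cluster (mediator), Y: sentiment label.
# P(Y | do(X=x)) = sum_m P(m | x) * sum_x' P(x') * P(Y | x', m)

p_x = np.array([0.5, 0.5])                # prior over prompt variants x'
p_m_given_x = np.array([[0.8, 0.2],       # P(m | x): chain-cluster distribution
                        [0.3, 0.7]])      #   per prompt variant
p_y_given_xm = np.array([                 # P(y | x', m): label distribution
    [[0.9, 0.1], [0.4, 0.6]],             #   for prompt x'=0, clusters m=0,1
    [[0.7, 0.3], [0.2, 0.8]],             #   for prompt x'=1, clusters m=0,1
])

def front_door(x: int) -> np.ndarray:
    """Interventional label distribution P(Y | do(X=x))."""
    n_clusters = p_m_given_x.shape[1]
    total = np.zeros(p_y_given_xm.shape[-1])
    for m in range(n_clusters):
        # Inner sum marginalizes the prompt prior, blocking the backdoor path.
        inner = sum(p_x[xp] * p_y_given_xm[xp, m] for xp in range(len(p_x)))
        total += p_m_given_x[x, m] * inner
    return total

dist = front_door(0)   # e.g. array([0.70, 0.30]) with the tables above
```

The inner marginalization over prompt variants is what distinguishes this from naive majority voting over chains: the label estimate no longer inherits the confounded association between a particular prompt phrasing and the output.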