🤖 AI Summary
Large language models exhibit limited capability in causal reasoning tasks—particularly counterfactual question answering—due to inherent biases and insufficient grounding in causal mechanisms.
Method: We propose a novel paradigm for enhancing causal reasoning: (1) introducing CausalQA-Balanced, the first evaluation metric jointly optimizing factual and counterfactual accuracy to quantify reasoning bias; and (2) designing a causal-mechanism-inspired fine-tuning strategy integrating counterfactual question generation, multi-objective supervised fine-tuning, and feedback-driven optimization.
Contribution/Results: Our approach significantly improves model accuracy on counterfactual QA and strengthens generalization across inductive, deductive, and cross-task causal reasoning. Extensive experiments demonstrate systematic superiority over state-of-the-art baselines across multiple real-world scenarios, establishing a new benchmark for causally grounded language understanding.
📝 Abstract
Despite the increasing effectiveness of language models, their reasoning capabilities remain underdeveloped. In particular, causal reasoning through counterfactual question answering is lacking. This work aims to bridge this gap. We first derive novel metrics that balance accuracy in factual and counterfactual questions, capturing a more complete view of the reasoning abilities of language models than traditional factual-only based metrics. Second, we propose several fine-tuning approaches that aim to elicit better reasoning mechanisms, in the sense of the proposed metrics. Finally, we evaluate the performance of the fine-tuned language models in a variety of realistic scenarios. In particular, we investigate to what extent our fine-tuning approaches systemically achieve better generalization with respect to the base models in several problems that require, among others, inductive and deductive reasoning capabilities.