🤖 AI Summary
Large language models (LLMs) often fail to generalize to out-of-distribution (OOD) samples due to spurious correlations acquired during pre-training. To address this, we propose causality-aware post-training (CAPT), a post-training framework grounded in structural causal modeling. CAPT decomposes a biased prediction into two unbiased steps, *event estimation* and *event intervention*, without requiring any OOD annotations. Through this decomposition and parameter-efficient updates, CAPT mitigates pre-training biases while avoiding the new biases that standard fine-tuning commonly introduces. Using only 100 in-distribution (ID) samples, CAPT fine-tunes a 3B-parameter LLM and achieves significant improvements over supervised fine-tuning (SFT) and even larger LLMs on both the CLadder and PrOntoQA benchmarks. Critically, it improves performance on ID and OOD test sets simultaneously. CAPT thus offers a lightweight, annotation-efficient, and theoretically grounded approach to enhancing LLM robustness and distributional generalization through causal reasoning.
📝 Abstract
While large language models (LLMs) have demonstrated remarkable capabilities in language modeling, recent studies reveal that they often fail on out-of-distribution (OOD) samples due to spurious correlations acquired during pre-training. Here, we aim to mitigate such spurious correlations through causality-aware post-training (CAPT). By decomposing a biased prediction into two unbiased steps, known as *event estimation* and *event intervention*, we reduce LLMs' pre-training biases without incurring additional fine-tuning biases, thus enhancing the model's generalization ability. Experiments on the formal causal inference benchmark CLadder and the logical reasoning dataset PrOntoQA show that 3B-scale language models fine-tuned with CAPT can outperform both traditional SFT and larger LLMs on in-distribution (ID) and OOD tasks using only 100 ID fine-tuning samples, demonstrating the effectiveness and sample efficiency of CAPT.