Bias-Augmented Consistency Training Reduces Biased Reasoning in Chain-of-Thought

📅 2024-03-08
🏛️ arXiv.org
📈 Citations: 13
Influential: 3
📄 PDF
🤖 AI Summary
Chain-of-thought (CoT) reasoning is vulnerable to latent biases—including acquiescence bias, pseudo-few-shot patterns, and post-hoc rationalization—leading to erroneous attribution and answer conformity, thereby undermining interpretability and trustworthiness. To address this, we propose unsupervised Bias-Correction Training (BCT), a novel consistency-based learning framework. First, we construct a comprehensive benchmark covering nine distinct bias categories. Second, BCT introduces a human-annotation-free training paradigm that supports cross-bias generalization via bias-augmented data generation and consistency regularization. Evaluated on GPT-3.5-Turbo, BCT reduces error rates on targeted biases by 86% and achieves an average 37% error reduction on unseen biases. Our approach significantly enhances the fairness, reliability, and generalizability of CoT reasoning without requiring labeled bias data or architectural modifications.

Technology Category

Application Category

📝 Abstract
Chain-of-thought prompting (CoT) has the potential to improve the explainability of language model reasoning. But CoT can also systematically misrepresent the factors influencing models' behavior -- for example, rationalizing answers in line with a user's opinion. We first create a new dataset of 9 different biases that affect GPT-3.5-Turbo and Llama-8b models. These consist of spurious-few-shot patterns, post hoc rationalization, and sycophantic settings. Models switch to the answer implied by the bias, without mentioning the effect of the bias in the CoT. To mitigate this biased reasoning problem, we introduce bias-augmented consistency training (BCT), an unsupervised fine-tuning scheme that trains models to give consistent reasoning across prompts with and without biasing features. We construct a suite testing nine forms of biased reasoning on seven question-answering tasks, and find that applying BCT to GPT-3.5-Turbo with one bias reduces the rate of biased reasoning by 86% on held-out tasks. Moreover, this model generalizes to other forms of bias, reducing biased reasoning on held-out biases by an average of 37%. As BCT generalizes to held-out biases and does not require gold labels, this method may hold promise for reducing biased reasoning from as-of-yet unknown biases and on tasks where ground truth reasoning is unavailable.
Problem

Research questions and friction points this paper is trying to address.

Reduces biased reasoning in Chain-of-Thought models
Mitigates systematic misrepresentation in model explanations
Addresses generalization to unknown biases without gold labels
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bias-Augmented Consistency Training reduces biased reasoning
Unsupervised fine-tuning for consistent reasoning across prompts
Generalizes to unknown biases without requiring gold labels
🔎 Similar Papers
No similar papers found.