Mitigating Selection Bias with Node Pruning and Auxiliary Options

📅 2024-09-27
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
Large language models (LLMs) exhibit systematic selection bias in multiple-choice question answering, undermining the reliability of their decisions. To address this, we propose a dual-path debiasing framework grounded in internal model mechanisms: (1) Bias Node Pruning (BNP), a parameter-level pruning technique that suppresses bias-inducing neurons; and (2) Auxiliary Option Injection (AOI), a method compatible with both white-box and black-box settings that injects an auxiliary option to recalibrate the output probability distribution. We further introduce Choice KL Divergence (CKLD), a metric enabling quantitative, option-level characterization of selection bias. Extensive experiments across three major LLM families and multiple benchmark datasets demonstrate significant improvements in accuracy and substantial bias reduction. Crucially, our approach exhibits strong robustness across both open-weight and proprietary closed-source models, without requiring architectural modifications or retraining.

📝 Abstract
Large language models (LLMs) often exhibit systematic preferences for certain answer choices when responding to multiple-choice questions, a behavior known as selection bias. This bias reduces the accuracy and reliability of LLM outputs, limiting their usefulness in decision-critical applications. While prior work has focused on adjusting model inputs or outputs to mitigate this issue, our work takes a fundamentally different approach by identifying and removing the internal sources of bias. We introduce two methods: Bias Node Pruning (BNP), which prunes parameters that contribute to selection bias, and Auxiliary Option Injection (AOI), which introduces an additional answer choice to reduce bias in both white-box and black-box settings. To address the shortcomings of existing evaluation metrics, we propose Choice Kullback-Leibler Divergence (CKLD), a new metric that captures distributional imbalances in model predictions. Experiments on three LLMs across multiple datasets demonstrate that our methods consistently improve answer accuracy while reducing selection bias, providing a robust solution for both open- and closed-source models.
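Because AOI only edits the prompt, it can be applied to closed-source models through an API. The sketch below illustrates the general idea under stated assumptions: the function name, label scheme, and the exact wording of the auxiliary option are illustrative choices, not the paper's specification.

```python
def inject_auxiliary_option(question, options, aux_text="I don't know"):
    """Sketch of Auxiliary Option Injection (AOI): append an extra,
    neutral answer choice to a multiple-choice prompt so that probability
    mass attracted by positional/label bias has somewhere harmless to go.

    The auxiliary text and formatting here are assumptions for
    illustration; the paper's actual wording may differ.
    """
    labels = [chr(ord("A") + i) for i in range(len(options) + 1)]
    all_options = list(options) + [aux_text]
    lines = [question] + [f"{label}. {opt}" for label, opt in zip(labels, all_options)]
    return "\n".join(lines)

prompt = inject_auxiliary_option(
    "What is the capital of France?",
    ["London", "Paris", "Berlin"],
)
# The auxiliary choice becomes option D, after the original three.
```

Since this requires no access to weights or logits, it works identically for open-weight and proprietary models.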
Problem

Research questions and friction points this paper is trying to address.

LLMs show systematic answer preferences in multiple-choice questions
Selection bias reduces accuracy and reliability of LLM outputs
Existing metrics fail to capture prediction imbalances effectively
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bias Node Pruning removes bias-contributing parameters
Auxiliary Option Injection adds extra choice to reduce bias
Choice KL Divergence measures prediction imbalance effectively
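The CKLD idea can be sketched as a KL divergence between the model's empirical answer-choice distribution and a uniform reference over the option labels; note that the uniform reference and the function name are assumptions here, since the page does not give the paper's exact formula.

```python
import math
from collections import Counter

def choice_kl_divergence(predictions, options=("A", "B", "C", "D")):
    """Sketch of a CKLD-style metric: KL divergence between the
    empirical distribution of predicted option labels and the uniform
    distribution over the available options. A perfectly balanced model
    scores 0; larger values indicate stronger selection bias.

    The uniform reference distribution is an assumption for this sketch;
    the paper's exact definition may differ.
    """
    counts = Counter(predictions)
    n = len(predictions)
    k = len(options)
    ckld = 0.0
    for opt in options:
        p = counts.get(opt, 0) / n
        if p > 0:  # 0 * log(0) is treated as 0 by convention
            ckld += p * math.log(p / (1.0 / k))
    return ckld

# A model that always answers "A" over four options scores log(4);
# a model with perfectly balanced picks scores 0.
always_a = choice_kl_divergence(["A"] * 100)
balanced = choice_kl_divergence(["A", "B", "C", "D"] * 25)
```

Measuring bias at the level of the label distribution, rather than accuracy alone, is what lets the metric flag a model that is right often but for positionally skewed reasons.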