Correcting Negative Bias in Large Language Models through Negative Attention Score Alignment

📅 2024-07-31
🏛️ arXiv.org
📈 Citations: 4
Influential: 1
📄 PDF
🤖 AI Summary
Large language models (LLMs) exhibit systematic negative bias in binary decision tasks requiring complex reasoning, leading to significant precision–recall imbalance. This paper introduces the Negative Attention Score (NAS) — the first quantitative metric for measuring such bias — and identifies the attention heads predominantly responsible for it. We then propose NASA, a parameter-efficient fine-tuning method based on LoRA-style adapters, enabling targeted calibration of these “negative-biased” heads. Our approach integrates attention mechanism analysis, NAS-based modeling, and a multi-domain reasoning evaluation framework spanning mathematical, commonsense, and symbolic reasoning tasks. NASA substantially reduces the precision–recall gap while preserving or improving overall accuracy and cross-task generalization. Key contributions include: (1) a formal, systematic definition of NAS; (2) interpretable localization of bias-inducing attention heads; and (3) a lightweight, transferable paradigm for targeted bias mitigation.

📝 Abstract
Binary decision tasks, such as yes–no questions or answer verification, reflect significant real-world scenarios in which users seek confirmation about the correctness of their decisions on specific issues. In this work, we observe that language models exhibit a negative bias in binary decisions on complex reasoning tasks. Based on our observations and the rationale of attention-based model dynamics, we propose a negative attention score (NAS) to systematically and quantitatively formulate negative bias. Based on NAS, we identify attention heads that attend to negative tokens provided in the instructions as answer candidates of binary decisions, regardless of the question in the prompt, and validate their association with the negative bias. Additionally, we propose the negative attention score alignment (NASA) method, a parameter-efficient fine-tuning technique that corrects the extracted negatively biased attention heads. Experimental results across diverse reasoning domains and a large model search space demonstrate that NASA significantly reduces the gap between precision and recall caused by negative bias while preserving the models' generalization abilities.
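The abstract's core idea, attention heads that put disproportionate weight on the negative answer token regardless of the question, can be illustrated with a toy per-head score. The sketch below is an illustrative proxy under assumed inputs (a normalized attention tensor and known candidate-token positions), not the paper's exact NAS formulation.

```python
import numpy as np

def negative_attention_score(attn, neg_idx, pos_idx):
    """Toy per-head score: how much more a head attends to the
    negative answer candidate than the positive one.

    attn: (heads, seq, seq) row-normalized attention weights.
    neg_idx / pos_idx: positions of the negative / positive
    candidate tokens in the prompt (hypothetical setup).
    """
    # attention mass each head puts on each candidate token,
    # averaged over all query positions
    to_neg = attn[:, :, neg_idx].mean(axis=1)
    to_pos = attn[:, :, pos_idx].mean(axis=1)
    # ratio > 1 suggests a negatively biased head
    return to_neg / (to_pos + 1e-9)

# usage: flag heads whose score is well above 1
rng = np.random.default_rng(0)
attn = rng.random((8, 16, 16))
attn /= attn.sum(axis=-1, keepdims=True)  # rows sum to 1, like softmax
nas = negative_attention_score(attn, neg_idx=3, pos_idx=5)
biased_heads = np.where(nas > 1.5)[0]
```

In this spirit, heads with consistently high scores across prompts would be the candidates for targeted calibration.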
Problem

Research questions and friction points this paper is trying to address.

Language models exhibit a systematic negative bias in binary decision tasks, skewing precision against recall
No existing metric quantifies this bias or localizes the attention heads responsible for it
Bias correction must not degrade the model's generalization across reasoning tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposes Negative Attention Score (NAS) for bias quantification
Identifies negatively biased attention heads in models
Introduces NASA method for parameter-efficient bias correction
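The "parameter-efficient bias correction" contribution can be sketched as a LoRA-style low-rank adapter whose update is masked to the output slices of the heads flagged as negatively biased. This is a guess at the spirit of NASA's targeted calibration, with hypothetical names (`HeadLoRA`, layer sizes), not the authors' implementation.

```python
import torch
import torch.nn as nn

class HeadLoRA(nn.Module):
    """LoRA-style adapter confined to selected attention heads.

    Wraps a frozen linear projection and adds a trainable low-rank
    update, zeroed outside the chosen heads' output dimensions
    (illustrative sketch, not the paper's code).
    """
    def __init__(self, base: nn.Linear, head_dim: int, heads, r: int = 4):
        super().__init__()
        self.base = base
        for p in base.parameters():          # only the adapter trains
            p.requires_grad_(False)
        self.A = nn.Parameter(torch.zeros(r, base.in_features))
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        nn.init.normal_(self.A, std=0.01)    # B stays zero: identity at init
        # binary mask keeps the update confined to the flagged heads
        mask = torch.zeros(base.out_features)
        for h in heads:
            mask[h * head_dim:(h + 1) * head_dim] = 1.0
        self.register_buffer("mask", mask)

    def forward(self, x):
        delta = (x @ self.A.T) @ self.B.T    # low-rank update
        return self.base(x) + delta * self.mask

# usage: calibrate only head 1 of a 4-head, 32-dim projection
lora = HeadLoRA(nn.Linear(32, 32), head_dim=8, heads=[1])
x = torch.randn(2, 32)
out = lora(x)
```

Because `B` is initialized to zero, the wrapped layer starts as an exact identity over the base model, so the adapter can be trained on the bias objective without disturbing behavior elsewhere.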