CogBias: Measuring and Mitigating Cognitive Bias in Large Language Models

📅 2026-04-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the systematic cognitive biases that large language models exhibit in high-stakes decision-making, where the underlying mechanisms and effective mitigation strategies remain poorly understood. The authors introduce CogBias, a comprehensive benchmark spanning four bias families (Judgment, Information Processing, Social, and Response) and combine behavioral evaluation with internal-representation analysis. They show, for the first time, that these biases are encoded as linearly separable directions in the model's activation space and that bias directions across different models are approximately orthogonal. Leveraging linear probing and activation steering for debiasing, their approach achieves a 26%–32% reduction in bias score while largely preserving downstream task performance (negligible degradation in Llama; at most a 19.0-percentage-point drop in Qwen), demonstrating both the efficacy and the generalizability of the proposed interventions.
📝 Abstract
Large Language Models (LLMs) are increasingly deployed in high-stakes decision-making contexts. While prior work has shown that LLMs exhibit cognitive biases behaviorally, whether these biases correspond to identifiable internal representations and can be mitigated through targeted intervention remains an open question. We define LLM cognitive bias as systematic, reproducible deviations from correct answers in tasks with computable ground-truth baselines, and introduce LLM CogBias, a benchmark organized around four families of cognitive biases: Judgment, Information Processing, Social, and Response. We evaluate three LLMs and find that cognitive biases emerge systematically across all four families, with magnitudes and debiasing responses that are strongly family-dependent: prompt-level debiasing substantially reduces Response biases but backfires for Judgment biases. Using linear probes under a contrastive design, we show that these biases are encoded as linearly separable directions in model activation space. Finally, we apply activation steering to modulate biased behavior, achieving a 26–32% reduction in bias score (fraction of biased responses) while preserving downstream capability on 25 benchmarks (Llama: negligible degradation; Qwen: up to −19.0pp for Judgment biases). Despite near-orthogonal bias representations across models (mean cosine similarity 0.01), steering reduces bias at similar rates across architectures (r(246) = .621, p < .001), suggesting shared functional organization.
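The probe-then-steer pipeline described in the abstract can be sketched on synthetic data. This is a minimal geometric illustration only: the dimensions, the difference-of-means probe, and full projection removal are assumptions for exposition, not the paper's actual method or hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_pairs = 64, 200  # hypothetical hidden size and dataset size

# Synthetic activations for matched prompt pairs: each pair shares a base
# activation, shifted along an (unknown) ground-truth bias direction.
true_dir = rng.normal(size=d_model)
true_dir /= np.linalg.norm(true_dir)
base = rng.normal(size=(n_pairs, d_model))
biased_acts = base + 1.5 * true_dir     # responses exhibiting the bias
unbiased_acts = base - 1.5 * true_dir   # matched unbiased responses

# Contrastive linear probe: the difference-of-means vector estimates the
# linearly separable bias direction in activation space.
bias_dir = biased_acts.mean(axis=0) - unbiased_acts.mean(axis=0)
bias_dir /= np.linalg.norm(bias_dir)

# Probe quality: project each activation onto the direction, classify by sign.
scores = np.concatenate([biased_acts @ bias_dir, unbiased_acts @ bias_dir])
labels = np.concatenate([np.ones(n_pairs), np.zeros(n_pairs)])
acc = ((scores > 0) == labels).mean()

# Activation steering: remove the component along the bias direction,
# leaving the rest of the representation untouched.
steered = biased_acts - np.outer(biased_acts @ bias_dir, bias_dir)
```

In practice a probe would be trained per layer on real model activations, and steering typically scales the direction by a tuned coefficient rather than projecting it out entirely; the sketch only shows the underlying geometry of a linearly separable bias direction.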
Problem

Research questions and friction points this paper is trying to address.

cognitive bias
large language models
bias mitigation
internal representations
systematic deviations
Innovation

Methods, ideas, or system contributions that make the work stand out.

cognitive bias
activation steering
linear probing
bias mitigation
large language models