Cognitive Debiasing Large Language Models for Decision-Making

📅 2025-04-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper addresses the problem of output inaccuracy in large language models (LLMs) deployed in high-stakes decision-making domains—such as finance, healthcare, and law—caused by *co-occurring cognitive biases* of multiple types. To tackle this, we propose an *iterative self-debiasing framework* comprising three dynamic stages: chain-of-thought–based bias detection, causality-informed bias attribution, and multi-round reflective prompt rewriting. The framework operates end-to-end, requires *no human annotation of bias types*, and is compatible with both closed- and open-weight LLMs. Unlike existing methods, which assume a single bias source, ours is the first to support scenarios with *multiple concurrent biases*. Experiments across financial, medical, and legal decision tasks demonstrate that our approach consistently outperforms state-of-the-art prompting and debiasing baselines under unbiased, single-bias, and multi-bias settings, achieving substantial and robust improvements in average accuracy.

📝 Abstract
Large language models (LLMs) have shown potential in supporting decision-making applications, particularly as personal conversational assistants in the financial, healthcare, and legal domains. While prompt engineering strategies have enhanced the capabilities of LLMs in decision-making, cognitive biases inherent to LLMs present significant challenges. Cognitive biases are systematic patterns of deviation from norms or rationality in decision-making that can lead to the production of inaccurate outputs. Existing cognitive bias mitigation strategies assume that input prompts contain (exactly) one type of cognitive bias and therefore fail to perform well in realistic settings where there may be any number of biases. To fill this gap, we propose a cognitive debiasing approach, called self-debiasing, that enhances the reliability of LLMs by iteratively refining prompts. Our method follows three sequential steps -- bias determination, bias analysis, and cognitive debiasing -- to iteratively mitigate potential cognitive biases in prompts. Experimental results on finance, healthcare, and legal decision-making tasks, using both closed-source and open-source LLMs, demonstrate that the proposed self-debiasing method outperforms both advanced prompt engineering methods and existing cognitive debiasing techniques in average accuracy under no-bias, single-bias, and multi-bias settings.
Problem

Research questions and friction points this paper is trying to address.

Mitigating cognitive biases in LLMs for decision-making
Addressing multiple biases in realistic prompt scenarios
Enhancing LLM reliability through iterative self-debiasing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-debiasing method for LLM cognitive bias mitigation
Iterative prompt refinement via three-step process
Outperforms existing techniques in multi-bias settings
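The three-step loop described above (bias determination, bias analysis, cognitive debiasing) can be sketched as an iterative prompt-refinement routine. This is a minimal illustration, not the paper's implementation: `call_llm` is a stand-in stub, and the toy "anchoring" detection and rewrite rules are invented for demonstration; a real system would issue the paper's actual prompts to an LLM at each step.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call (hypothetical toy behavior)."""
    # Toy rule: treat a leading consensus claim as an anchoring cue.
    if "Most analysts say" in prompt:
        return "BIAS: anchoring"
    return "NO_BIAS"


def self_debias(prompt: str, max_rounds: int = 3) -> str:
    """Iteratively refine a prompt until no cognitive bias is detected."""
    for _ in range(max_rounds):
        # Step 1: bias determination -- ask whether the prompt is biased.
        verdict = call_llm(f"Does this prompt contain a cognitive bias?\n{prompt}")
        if verdict == "NO_BIAS":
            break
        # Step 2: bias analysis -- extract the identified bias type.
        bias_type = verdict.split(":", 1)[1].strip()
        # Step 3: cognitive debiasing -- rewrite the prompt to remove the
        # biased framing (toy rewrite: strip the anchoring sentence).
        if bias_type == "anchoring":
            prompt = prompt.replace("Most analysts say the stock will rise. ", "")
    return prompt


biased = "Most analysts say the stock will rise. Should I buy stock X?"
print(self_debias(biased))  # prints "Should I buy stock X?"
```

The loop terminates either when the determination step reports no bias or after a fixed round budget, mirroring the iterative multi-bias setting the paper targets.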
Authors
Yougang Lyu, University of Amsterdam (Natural Language Processing, Large Language Models, Information Retrieval)
Shijie Ren, Shandong University
Yue Feng, University of Birmingham
Zihan Wang, University of Amsterdam
Zhumin Chen, Shandong University
Zhaochun Ren, Leiden University (Information Retrieval, Natural Language Processing)
M. D. Rijke, University of Amsterdam