Cognitive Biases in LLM-Assisted Software Development

📅 2026-01-12
🤖 AI Summary
This study systematically investigates the mechanisms, types, and impacts of cognitive biases in large language model (LLM)-assisted software development. Employing a mixed-methods approach—including observational experiments with 14 developers, a survey of 22 participants, qualitative coding, and validation by cognitive psychologists—the research establishes the first comprehensive taxonomy of cognitive biases in this context, comprising 15 categories that cover 90 specific bias instances. The findings reveal that 48.8% of developer behaviors are influenced by cognitive biases, and that 56.4% of these biased behaviors are directly attributable to human–LLM interaction. The taxonomy provides both a theoretical foundation and actionable mitigation strategies for the design of LLM-augmented programming tools and development practices.

📝 Abstract
The widespread adoption of Large Language Models (LLMs) in software development is transforming programming from a solution-generative to a solution-evaluative activity. This shift opens a pathway for new cognitive challenges that amplify existing decision-making biases or create entirely novel ones. One such type of challenge stems from cognitive biases: thinking patterns that lead people away from logical reasoning and result in sub-optimal decisions. How do cognitive biases manifest and impact decision-making in emerging AI-collaborative development? This paper presents the first comprehensive study of cognitive biases in LLM-assisted development. We employ a mixed-methods approach, combining observational studies with 14 student and professional developers, followed by surveys with 22 additional developers. We qualitatively compare the categories of biases affecting developers with those in traditional non-LLM workflows. Our findings suggest that LLM-related actions are more likely to be associated with novel biases. Through a systematic analysis of 90 cognitive biases specific to developer-LLM interactions, we develop a taxonomy of 15 bias categories validated by cognitive psychologists. We found that 48.8% of total programmer actions are biased, and that developer-LLM interactions account for 56.4% of these biased actions. We discuss how these bias categories manifest, present tools and practices for developers, and offer recommendations for LLM tool builders to help mitigate cognitive biases in human-AI programming.
Keywords

cognitive biases, LLM-assisted software development, developer decision-making, human-AI collaboration, software engineering, bias taxonomy
Authors

Xinyi Zhou, Master's Student, University of Southern California (Large Language Models, Software Engineering, User Experience)
Zeinadsadat Saghi, University of Southern California, Los Angeles, California, USA
Sadra Sabouri, University of Southern California (HCI, NLP, LLM, SE)
Rahul Pandita, Staff Researcher, GitHub Inc. (Developer Productivity, HCI, Automated Software Engineering, Software Security)
Mollie McGuire, Naval Postgraduate School, Monterey, California, USA
Souti Chattopadhyay, University of Southern California, Los Angeles, California, USA