Once Upon a Team: Investigating Bias in LLM-Driven Software Team Composition and Task Allocation

📅 2026-01-07
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how large language models (LLMs) may introduce systematic biases in socially sensitive software engineering decisions, such as team formation and task assignment. It presents the first joint analysis of the interactive effects of multiple demographic attributes—including nationality and pronouns—on LLM-driven recommendations. Through 3,000 controlled simulation trials and quantitative bias assessments, the fairness of three leading LLMs is evaluated in assigning technical and leadership roles. Findings reveal that even when professional competence is held constant, demographic attributes significantly influence both selection likelihood and role type, exposing implicit stereotypes that risk exacerbating group-based inequities in software development. These results provide empirical grounding for designing more equitable AI systems in collaborative engineering contexts.

📝 Abstract
LLMs are increasingly used to boost productivity and support software engineering tasks. However, when applied to socially sensitive decisions such as team composition and task allocation, they raise fairness concerns. Prior studies have revealed that LLMs may reproduce stereotypes; however, these analyses remain exploratory and examine sensitive attributes in isolation. This study investigates whether LLMs exhibit bias in team composition and task assignment by analyzing the combined effects of candidates' country and pronouns. Using three LLMs and 3,000 simulated decisions, we find systematic disparities: demographic attributes significantly shaped both selection likelihood and task allocation, even when accounting for expertise-related factors. Task distributions further reflected stereotypes, with technical and leadership roles unevenly assigned across groups. Our findings indicate that LLMs exacerbate demographic inequities in software engineering contexts, underscoring the need for fairness-aware assessment.
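The kind of quantitative check the abstract describes can be illustrated with a minimal sketch: given a log of simulated LLM decisions, compare per-group selection rates for candidates with identical qualifications. All group labels and numbers below are illustrative placeholders, not data from the paper.

```python
# Hypothetical sketch of a selection-rate disparity check across
# demographic groups in simulated LLM decisions. The trial data is
# fabricated for illustration only.
from collections import Counter

def selection_rates(trials):
    """Compute per-group selection rates from (group, selected) pairs."""
    totals, selected = Counter(), Counter()
    for group, was_selected in trials:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

# Illustrative trials: each tuple is (demographic group, was selected?),
# with professional competence assumed identical across groups.
trials = ([("A", True)] * 70 + [("A", False)] * 30
          + [("B", True)] * 50 + [("B", False)] * 50)

rates = selection_rates(trials)
disparity = max(rates.values()) - min(rates.values())
print(rates)      # {'A': 0.7, 'B': 0.5}
print(disparity)  # 0.2 — a gap despite identical qualifications
```

In a fuller analysis one would replace the raw rate gap with a statistical test (e.g. a chi-square test of independence) over the full set of trials, as implied by the paper's "quantitative bias assessments."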
Problem

Research questions and friction points this paper is trying to address.

bias
LLM
team composition
task allocation
fairness
Innovation

Methods, ideas, or system contributions that make the work stand out.

bias in LLMs
team composition
task allocation
fairness-aware AI
demographic disparities