AI Self-preferencing in Algorithmic Hiring: Empirical Evidence and Insights

πŸ“… 2025-08-30
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This paper identifies a significant self-preference bias in large language models (LLMs) deployed in hiringβ€”where LLMs systematically favor resumes generated by the same model (e.g., when both resume optimization and screening are performed by identical LLMs), leading to the undervaluation of human-written resumes. Method: Through large-scale controlled experiments, cross-model comparisons (spanning major commercial and open-source LLMs), labor-market workflow simulations, and targeted intervention studies, the study empirically validates this bias in realistic recruitment settings. Contribution/Results: All evaluated models exhibit 68%–88% preference for same-origin AI-generated resumes; shortlisting probability for same-model AI users increases by 23%–60%. Fairness degradation is especially pronounced in sales and accounting roles. The work introduces a novel intervention leveraging model self-identification capability, reducing bias by over 50%, thereby extending AI fairness research to the underexplored AI–AI interaction dimension.

Technology Category

Application Category

πŸ“ Abstract
As generative artificial intelligence (AI) tools become widely adopted, large language models (LLMs) are increasingly involved on both sides of decision-making processes, ranging from hiring to content moderation. This dual adoption raises a critical question: do LLMs systematically favor content that resembles their own outputs? Prior research in computer science has identified self-preference bias -- the tendency of LLMs to favor their own generated content -- but its real-world implications have not been empirically evaluated. We focus on the hiring context, where job applicants often rely on LLMs to refine resumes, while employers deploy them to screen those same resumes. Using a large-scale controlled resume correspondence experiment, we find that LLMs consistently prefer resumes generated by themselves over those written by humans or produced by alternative models, even when content quality is controlled. The bias against human-written resumes is particularly substantial, with self-preference bias ranging from 68% to 88% across major commercial and open-source models. To assess labor market impact, we simulate realistic hiring pipelines across 24 occupations. These simulations show that candidates using the same LLM as the evaluator are 23% to 60% more likely to be shortlisted than equally qualified applicants submitting human-written resumes, with the largest disadvantages observed in business-related fields such as sales and accounting. We further demonstrate that this bias can be reduced by more than 50% through simple interventions targeting LLMs' self-recognition capabilities. These findings highlight an emerging but previously overlooked risk in AI-assisted decision making and call for expanded frameworks of AI fairness that address not only demographic-based disparities, but also biases in AI-AI interactions.
Problem

Research questions and friction points this paper is trying to address.

LLMs favor their own generated resumes in hiring
Self-preference bias creates unfair advantages for AI-assisted applicants
AI-AI interactions introduce new fairness risks in employment
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs prefer self-generated resumes over human-written ones
Simulated hiring pipelines show significant candidate shortlisting bias
Interventions reduce self-preference bias by over 50%
πŸ”Ž Similar Papers
Jiannan Xu
Jiannan Xu
Ph.D. Candidate, Robert H. Smith School of Business, University of Maryland
Marketplace AnalyticsService OperationsAI for Social Good
G
Gujie Li
School of Computing, National University of Singapore, Singapore 117417
J
Jane Yi Jiang
Max M. Fisher College of Business, The Ohio State University, OH 43210, United States