Gender Bias in Generative AI-assisted Recruitment Processes

๐Ÿ“… 2026-03-12
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
This study investigates how generative artificial intelligence may implicitly reinforce gender bias in recruitment, focusing on graduates under 35 in Italy. It presents the first systematic evaluation of gendered linguistic bias in career recommendations generated by the state-of-the-art large language model GPT-5. Using prompt engineering and content analysis, the authors conducted experiments with 24 simulated resumes balanced across gender, age, experience, and field of study. While the recommended job titles and industries showed no significant gender differences, the model consistently employed more affective and empathy-related language when describing female candidates, whereas descriptions of male candidates favored strategic and analytical terminology. These findings reveal latent gender stereotypes embedded in the modelโ€™s outputs, highlighting the subtle yet consequential ways in which advanced AI systems may perpetuate societal biases in high-stakes domains such as hiring.

Technology Category

Application Category

๐Ÿ“ Abstract
In recent years, generative artificial intelligence (GenAI) systems have assumed increasingly crucial roles in selection processes, personnel recruitment and analysis of candidates' profiles. However, the employment of large language models (LLMs) risks reproducing, and in some cases amplifying, gender stereotypes and bias already present in the labour market. The objective of this paper is to evaluate and measure this phenomenon, analysing how a state-of-the-art generative model (GPT-5) suggests occupations based on gender and work experience background, focusing on under-35-year-old Italian graduates. The model has been prompted to suggest jobs to 24 simulated candidate profiles, which are balanced in terms of gender, age, experience and professional field. Although no significant differences emerged in job titles and industry, gendered linguistic patterns emerged in the adjectives attributed to female and male candidates, indicating a tendency of the model to associate women with emotional and empathetic traits, while men with strategic and analytical ones. The research raises an ethical question regarding the use of these models in sensitive processes, highlighting the need for transparency and fairness in future digital labour markets.
Problem

Research questions and friction points this paper is trying to address.

Gender Bias
Generative AI
Recruitment
Large Language Models
Stereotypes
Innovation

Methods, ideas, or system contributions that make the work stand out.

gender bias
generative AI
large language models
recruitment fairness
linguistic stereotyping