What Does a Software Engineer Look Like? Exploring Societal Stereotypes in LLMs

📅 2025-01-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study systematically evaluates gender and racial bias in GPT-4 and Microsoft Copilot within software engineering (SE) recruitment. To address the lack of standardized, reproducible benchmarks for occupational bias assessment, the authors construct a multimodal evaluation framework comprising 300 structured candidate profiles. The methodology integrates textual recommendation analysis with generative image synthesis (via DALL·E/Image Creator), augmented by prompt engineering, multi-role task design, text-based preference modeling, and statistical demographic analysis of the generated images. Results reveal significant model bias: both systems consistently favor male and Caucasian candidates, especially for senior roles, and generate images disproportionately depicting light-skinned, young, and slender individuals, thereby reinforcing harmful stereotypes. This work shows how large language models may exacerbate diversity gaps in SE hiring and introduces the first occupation-oriented, multimodal framework for quantifying social bias and algorithmic fairness in professional contexts.
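The summary describes prompting each model with structured profiles against job requirements. The paper's exact prompts are not reproduced here, but a minimal sketch of such a recommendation query, assuming the official OpenAI Python SDK and hypothetical profile text, could look like this:

```python
# Illustrative sketch only: the paper's actual prompts, profiles, and parameters
# are not reproduced here. Assumes the official OpenAI Python SDK
# (`pip install openai`) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Hypothetical structured candidate profiles, stand-ins for the paper's dataset.
profiles = [
    "Candidate A: 8 years backend experience, MSc CS, led a team of 5.",
    "Candidate B: 3 years full-stack experience, BSc CS, open-source contributor.",
]

job_description = "Senior Software Engineer: 5+ years experience, team leadership."

prompt = (
    f"Job requirements:\n{job_description}\n\n"
    "Candidate profiles:\n" + "\n".join(profiles) + "\n\n"
    "Select the top 5 candidates for this role and briefly justify each choice."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a recruitment assistant."},
        {"role": "user", "content": prompt},
    ],
)
print(response.choices[0].message.content)
```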

📝 Abstract
Large language models (LLMs) have rapidly gained popularity and are being embedded into professional applications due to their capabilities in generating human-like content. However, unquestioned reliance on their outputs and recommendations can be problematic as LLMs can reinforce societal biases and stereotypes. This study investigates how LLMs, specifically OpenAI's GPT-4 and Microsoft Copilot, can reinforce gender and racial stereotypes within the software engineering (SE) profession through both textual and graphical outputs. We used each LLM to generate 300 profiles, consisting of 100 gender-based and 50 gender-neutral profiles, for a recruitment scenario in SE roles. Recommendations were generated for each profile and evaluated against the job requirements for four distinct SE positions. Each LLM was asked to select the top 5 candidates and subsequently the best candidate for each role. Each LLM was also asked to generate images for the top 5 candidates, providing a dataset for analysing potential biases in both text-based selections and visual representations. Our analysis reveals that both models preferred male and Caucasian profiles, particularly for senior roles, and favoured images featuring traits such as lighter skin tones, slimmer body types, and younger appearances. These findings highlight that underlying societal biases influence the outputs of LLMs, contributing to narrow, exclusionary stereotypes that can further limit diversity and perpetuate inequities in the SE field. As LLMs are increasingly adopted within SE research and professional practices, awareness of these biases is crucial to prevent the reinforcement of discriminatory norms and to ensure that AI tools are leveraged to promote an inclusive and equitable engineering culture rather than hinder it.
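The abstract does not name the statistical procedure used to compare selection rates across demographic groups; a common choice for this kind of analysis is a chi-square test of independence. The counts below are placeholders, not the paper's data:

```python
# Illustrative sketch with fabricated counts, not the paper's results: a
# chi-square test of independence checking whether top-5 selections are
# distributed independently of candidate gender. Requires scipy.
from scipy.stats import chi2_contingency

# Rows: selected for top 5 vs. not selected; columns: male, female profiles.
# These numbers are placeholders for the counts an evaluation would collect.
observed = [
    [38, 12],   # selected
    [62, 88],   # not selected
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}, dof = {dof}")
# A small p-value would indicate that selection rates depend on gender,
# i.e. evidence of biased preference in the model's recommendations.
```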
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Bias
Software Engineering
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bias Evaluation
Large Language Models
Diversity in Software Engineering