🤖 AI Summary
This study examines how large language models (LLMs) weigh candidate attributes, whether those weightings align with human preferences, and whether the models exhibit implicit biases in hiring decisions. The authors introduce, for the first time in LLM research, a full-factorial experimental design from economics that is commonly used to analyze human hiring behavior. Building a synthetic dataset from real freelancing profiles, they systematically evaluate how models implicitly assign weights to matching criteria such as skills and experience, and they further examine fairness across different project contexts and demographic groups. Findings indicate that LLMs rely primarily on core productivity signals and show no significant group-level discrimination overall; however, the weight assigned to these signals varies across intersecting demographic subgroups, revealing latent implicit biases in specific subgroup interactions.
📝 Abstract
General-purpose Large Language Models (LLMs) show significant potential in recruitment applications, where decisions require reasoning over unstructured text, balancing multiple criteria, and inferring fit and competence from indirect productivity signals. Yet it remains unclear how LLMs assign importance to each attribute and whether such assignments are in line with economic principles, recruiter preferences, or broader societal norms. We propose a framework for evaluating an LLM's decision logic in recruitment by drawing on established economic methodologies for analyzing human hiring behavior. We build synthetic datasets from real freelancer profiles and project descriptions from a major European online freelance marketplace and apply a full factorial design to estimate how an LLM weighs different match-relevant criteria when evaluating freelancer-project fit. We identify which attributes the LLM prioritizes and analyze how these weights vary across project contexts and demographic subgroups. Finally, we explain how a comparable experimental setup could be implemented with human recruiters to assess alignment between model and human decisions. Our findings reveal that the LLM prioritizes core productivity signals, such as skills and experience, but interprets certain features beyond their explicit matching value. While showing minimal average discrimination against minority groups, intersectional effects reveal that productivity signals carry different weights across demographic groups.
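The core methodology, a full factorial design whose outcomes are regressed on the varied factors to recover implicit weights, can be sketched in a few lines. The factor names, the scoring function, and the coefficient values below are all hypothetical stand-ins (the paper's actual factors come from real freelancer profiles, and scores would come from querying the LLM); this is a minimal illustration of the estimation logic only.

```python
import itertools
import numpy as np

# Hypothetical binary attributes varied in the full factorial design
# (assumed names, not the paper's actual factors).
FACTORS = ["skill_match", "experience", "top_rating", "minority_group"]

# Full factorial design: every combination of factor levels (2^4 = 16 profiles).
design = np.array(list(itertools.product([0, 1], repeat=len(FACTORS))), dtype=float)

def llm_score(profile):
    """Stand-in for querying the LLM with a synthetic profile.

    In the real experiment this would be the model's rating of
    freelancer-project fit; the coefficients here are illustrative only.
    """
    true_weights = np.array([0.5, 0.3, 0.15, 0.0])  # hypothetical
    return 1.0 + profile @ true_weights

scores = np.array([llm_score(p) for p in design])

# Estimate implicit weights by OLS: score ~ intercept + sum(beta_i * factor_i).
X = np.column_stack([np.ones(len(design)), design])
beta, *_ = np.linalg.lstsq(X, scores, rcond=None)

for name, b in zip(["intercept"] + FACTORS, beta):
    print(f"{name}: {b:+.3f}")
```

Because the full factorial design is balanced and orthogonal, the regression cleanly separates each attribute's contribution; intersectional effects, as analyzed in the paper, would additionally include interaction terms (products of factor columns) in `X`.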