🤖 AI Summary
Existing recruitment recommendation algorithms predominantly address fairness from a unidirectional perspective -- typically that of job seekers -- while neglecting the distinct fairness concerns of employers and platform operators. To address this gap, we conducted semi-structured interviews with 40 stakeholders across diverse groups, followed by qualitative thematic analysis, to systematically identify multi-stakeholder unfairness patterns. Based on these findings, we propose the first tripartite fairness definition encompassing job seekers, employers, and platforms, and develop an actionable fairness mapping framework. Crucially, we integrate this multi-stakeholder fairness model into candidate recommendation systems, moving beyond conventional unidirectional fairness metrics. Our fairness indicators are empirically grounded and algorithmically implementable, offering both theoretical foundations and practical guidance for designing more inclusive, equitable recruitment recommendation systems.
📝 Abstract
Already before the enactment of the EU AI Act, candidate or job recommendation for algorithmic hiring -- semi-automatically matching CVs to job postings -- was used as an example of a high-risk application where unfair treatment could result in serious harms to job seekers. Recommending candidates to jobs or jobs to candidates, however, is also a fitting example of a multi-stakeholder recommendation problem. In such multi-stakeholder systems, the end user is not the only party whose interests should be considered when generating recommendations. Beyond job seekers, other parties -- such as recruiters, the organizations behind the job postings, and the recruitment agency itself -- also have a stake in the outcome and deserve to have their perspectives included in the design of relevant fairness metrics. Nevertheless, past analyses of fairness in algorithmic hiring have been restricted to single-side fairness, ignoring the perspectives of the other stakeholders. In this paper, we address this gap and present a multi-stakeholder approach to fairness in a candidate recommender system that recommends relevant candidate CVs to human recruiters in a human-in-the-loop algorithmic hiring scenario. We conducted semi-structured interviews with 40 different stakeholders (job seekers, companies, recruiters, and other job portal employees). We used these interviews to explore their lived experiences of unfairness in hiring and to co-design definitions of fairness, as well as metrics that might capture these experiences. Finally, we attempt to reconcile and map these different (and sometimes conflicting) perspectives and definitions to existing (categories of) fairness metrics that are relevant for our candidate recommendation scenario.
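To give a flavor of the kind of ranking-level fairness metric the abstract alludes to, the sketch below computes position-discounted exposure per candidate group in a single recommendation list, a common building block for job-seeker-side fairness checks. This is an illustration under assumed conventions (logarithmic position discount, a hypothetical `group_of` mapping from candidate to protected group), not the paper's actual indicators.

```python
import math
from collections import defaultdict

def exposure_by_group(ranking, group_of):
    """Average position-discounted exposure (1 / log2(rank + 1), with
    ranks starting at 1) for each candidate group in one ranked list.
    `group_of` is a hypothetical mapping from candidate id to group."""
    totals, counts = defaultdict(float), defaultdict(int)
    for pos, cand in enumerate(ranking):
        g = group_of[cand]
        totals[g] += 1.0 / math.log2(pos + 2)  # pos is 0-based
        counts[g] += 1
    return {g: totals[g] / counts[g] for g in totals}

def disparity(ranking, group_of):
    """Ratio of least- to most-exposed group; 1.0 means parity."""
    per_group = exposure_by_group(ranking, group_of)
    return min(per_group.values()) / max(per_group.values())
```

A multi-stakeholder analysis would pair such a candidate-side check with measures for the other parties (e.g., how exposure is distributed across recruiters' searches or across hiring organizations), which is where the conflicting perspectives discussed in the paper arise.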