I Think, Therefore I Am Under-Qualified? A Benchmark for Evaluating Linguistic Shibboleth Detection in LLM Hiring Evaluations

📅 2025-08-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study exposes implicit linguistic biases in large language models (LLMs) during hiring evaluations, specifically discrimination against linguistic markers such as hedges, dialectal variants, and socio-regional expressions. To address this, we propose a controllable linguistic-variation framework and introduce the first language-sensitive benchmark for recruitment: 100 semantically equivalent yet formally diverse interview question-answer pairs, constructed via linguistics-informed controlled experiments that decouple surface form from semantic content. Integrating automated scoring with multidimensional bias analysis, we systematically quantify discrimination effects for the first time: responses containing hedging markers score 25.6% lower on average, and consistent cross-model biases appear along gender, socioeconomic, and regional dimensions. Our work extends the scope of AI fairness evaluation and establishes a novel paradigm for detecting and mitigating linguistic bias in LLMs.

📝 Abstract
This paper introduces a comprehensive benchmark for evaluating how Large Language Models (LLMs) respond to linguistic shibboleths: subtle linguistic markers that can inadvertently reveal demographic attributes such as gender, social class, or regional background. Through carefully constructed interview simulations using 100 validated question-response pairs, we demonstrate how LLMs systematically penalize certain linguistic patterns, particularly hedging language, despite equivalent content quality. Our benchmark generates controlled linguistic variations that isolate specific phenomena while maintaining semantic equivalence, which enables the precise measurement of demographic bias in automated evaluation systems. We validate our approach along multiple linguistic dimensions, showing that hedged responses receive 25.6% lower ratings on average, and demonstrate the benchmark's effectiveness in identifying model-specific biases. This work establishes a foundational framework for detecting and measuring linguistic discrimination in AI systems, with broad applications to fairness in automated decision-making contexts.
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLM responses to linguistic shibboleths in hiring
Measuring demographic bias in automated evaluation systems
Detecting linguistic discrimination in AI decision-making
Innovation

Methods, ideas, or system contributions that make the work stand out.

Benchmark for LLM linguistic shibboleth detection
Controlled linguistic variations isolate bias
Measures demographic bias in automated evaluations
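The controlled-variation idea above can be sketched as a minimal toy: pair each plain response with a hedged variant of identical content, score both with the same evaluator, and report the mean relative score drop. This is an illustrative assumption, not the paper's actual implementation; the hedge list, function names, and scoring interface are all hypothetical.

```python
# Toy sketch (assumed, not the paper's code) of measuring a hedging penalty:
# each plain response is paired with a semantically equivalent hedged variant,
# both are scored, and the mean relative score drop is reported.

HEDGES = ["i think", "maybe", "sort of", "i guess"]  # illustrative hedge list

def hedge(response: str, marker: str = "I think") -> str:
    """Return a semantically equivalent variant prefixed with a hedging marker."""
    return f"{marker} {response[0].lower()}{response[1:]}"

def mean_relative_gap(score_fn, responses):
    """Mean relative score drop of hedged vs. plain responses (positive = penalty)."""
    drops = []
    for plain in responses:
        s_plain = score_fn(plain)
        s_hedged = score_fn(hedge(plain))
        drops.append((s_plain - s_hedged) / s_plain)
    return sum(drops) / len(drops)
```

With a scorer that docks points for hedging markers, the gap comes out positive, mirroring in miniature the 25.6% average penalty the paper reports for real LLM evaluators.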