AI Summary
Existing large language models (LLMs) often inherit implicit social biases from their training data, yet the field lacks interpretable, statistically verifiable benchmarks for quantifying such biases.
Method: We propose the Demographic Implicit Fairness (DIF) framework, which controllably injects demographic roles into logic and mathematics problems and quantifies model sensitivity to irrelevant social information via response consistency analysis and statistical significance testing.
Contribution/Results: We introduce DIF, a novel, interpretable, and statistically testable metric for implicit bias. We empirically demonstrate a significant negative correlation between DIF scores and question-answering accuracy, revealing an inherent trade-off between fairness and robustness. Evaluations across multiple state-of-the-art LLMs show that DIF scores consistently differentiate bias levels and strongly align with human assessments (Spearman's ρ > 0.85), validating the metric's reliability and practical utility.
Abstract
As Large Language Models (LLMs) have risen in prominence over the past few years, concern has grown over biases that LLMs inherit from their training data. Previous studies have examined how LLMs exhibit implicit bias, such as changes in response generation when different social contexts are introduced. We argue that this implicit bias is not only an ethical issue but also a technical one, as it reveals an inability of LLMs to disregard extraneous information. However, unlike other measures of LLM capability, there is no standard method for benchmarking this specific subset of LLM bias. To bridge this gap, we developed a method for calculating an easily interpretable benchmark, DIF (Demographic Implicit Fairness), by evaluating preexisting logic and math problem datasets augmented with sociodemographic personas. We demonstrate that this method can statistically validate the presence of implicit bias in LLM behavior, and we find an inverse trend between question-answering accuracy and implicit bias, supporting our argument.
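The evaluation loop described above (inject a persona into a problem, compare the model's answer against its persona-free baseline, and aggregate the inconsistency rate) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the personas, questions, `ask_model` stub, and the simple flip-rate scoring are all hypothetical placeholders for a real LLM call and the full DIF statistics.

```python
# Hypothetical personas and toy problems; the real DIF framework evaluates
# preexisting logic/math datasets and adds statistical significance testing.
PERSONAS = ["a young woman", "an elderly man", "a teenager"]
QUESTIONS = ["If x + 3 = 7, what is x?", "What is 12 * 4?"]

def ask_model(prompt):
    # Stub standing in for an LLM call. Assumption for illustration:
    # a biased model answers one question differently for one persona.
    if "elderly" in prompt and "12 * 4" in prompt:
        return "46"  # simulated persona-induced error
    return "4" if "x + 3" in prompt else "48"

def dif_score(questions, personas):
    """Fraction of (question, persona) pairs whose answer differs from the
    persona-free baseline; 0.0 means the model is perfectly consistent."""
    flips, total = 0, 0
    for q in questions:
        baseline = ask_model(q)  # answer without any demographic role
        for p in personas:
            total += 1
            if ask_model(f"{p} asks: {q}") != baseline:
                flips += 1
    return flips / total

print(round(dif_score(QUESTIONS, PERSONAS), 3))  # 1 flip in 6 pairs -> 0.167
```

With a real model, each (question, persona) pair would be sampled multiple times and the flip counts fed into a significance test before reporting a DIF score.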