Distributive Fairness in Large Language Models: Evaluating Alignment with Human Values

📅 2025-02-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study systematically evaluates the capacity of large language models (LLMs) to apply distributive justice principles, such as equitability, envy-freeness, and Rawlsian maximin, and their alignment with human fairness preferences in resource allocation. We propose a multidimensional fairness-aware evaluation framework that combines controlled prompt engineering, a menu-based choice paradigm, and a human-annotated dataset for cross-model benchmarking. Our findings reveal that allocations generated by LLMs deviate significantly from human fairness intuitions; that presenting a predefined menu of choices improves fairness-judgment accuracy by up to 47% for certain models; that fairness reasoning is notably fragile under both semantic and non-semantic prompt variations; and that current LLMs fail to use monetary transfers effectively to mitigate inequality. Based on these insights, we propose prompt design guidelines and fine-tuning strategies to enhance fairness alignment, offering both theoretical foundations and practical pathways toward building fairness-aware AI systems.
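To make the evaluated criteria concrete, the following is a minimal sketch, assuming additive valuations over indivisible items, of how equitability, envy-freeness, and Rawlsian maximin can be checked for a candidate allocation. The data layout and function names are illustrative, not the paper's code:

```python
# Minimal sketch (not from the paper) of the three fairness criteria above,
# assuming additive valuations over indivisible items; names are illustrative.

def bundle_value(agent, bundle, valuations):
    """Value that `agent` assigns to a bundle of items (additive utilities)."""
    return sum(valuations[agent][item] for item in bundle)

def is_equitable(allocation, valuations, tol=1e-9):
    """Equitability: every agent derives the same value from its own bundle."""
    own = [bundle_value(i, b, valuations) for i, b in enumerate(allocation)]
    return max(own) - min(own) <= tol

def is_envy_free(allocation, valuations):
    """Envy-freeness: no agent values another agent's bundle above its own."""
    n = len(allocation)
    return all(
        bundle_value(i, allocation[i], valuations)
        >= bundle_value(i, allocation[j], valuations)
        for i in range(n) for j in range(n) if i != j
    )

def maximin_value(allocation, valuations):
    """Rawlsian maximin objective: the welfare of the worst-off agent."""
    return min(bundle_value(i, b, valuations) for i, b in enumerate(allocation))

# Two agents, three items: valuations[i][j] is agent i's value for item j.
valuations = [[6, 1, 3], [2, 5, 4]]
allocation = [{0}, {1, 2}]  # agent 0 gets item 0; agent 1 gets items 1 and 2
print(is_equitable(allocation, valuations))   # False (own values: 6 vs 9)
print(is_envy_free(allocation, valuations))   # True  (6 >= 4 and 9 >= 2)
print(maximin_value(allocation, valuations))  # 6
```

Note that the example allocation is envy-free but not equitable, which is exactly the kind of distinction the benchmark probes models on.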

📝 Abstract
The growing interest in employing large language models (LLMs) for decision-making in social and economic contexts has raised questions about their potential to function as agents in these domains. A significant number of societal problems involve the distribution of resources, where fairness, along with economic efficiency, plays a critical role in the desirability of outcomes. In this paper, we examine whether LLM responses adhere to fundamental fairness concepts such as equitability, envy-freeness, and Rawlsian maximin, and investigate their alignment with human preferences. We evaluate the performance of several LLMs, providing a comparative benchmark of their ability to reflect these measures. Our results demonstrate a lack of alignment between current LLM responses and human distributional preferences. Moreover, LLMs are unable to utilize money as a transferable resource to mitigate inequality. Nonetheless, we demonstrate a stark contrast when (some) LLMs are tasked with selecting from a predefined menu of options rather than generating one. In addition, we analyze the robustness of LLM responses to variations in semantic factors (e.g., intentions or personas) or non-semantic prompting changes (e.g., templates or orderings). Finally, we highlight potential strategies aimed at enhancing the alignment of LLM behavior with well-established fairness concepts.
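The menu-based paradigm contrasted with free generation in the abstract can be illustrated with a short sketch. The prompt wording, the menu options, and the `query_llm` helper are hypothetical stand-ins, not the paper's actual protocol:

```python
# Illustrative sketch of the menu-based choice paradigm: instead of asking the
# model to generate an allocation, it selects from a fixed set of options.
# `query_llm` is a hypothetical stand-in for whatever chat API is used.

def build_menu_prompt(resources, menu):
    """Assemble a prompt asking the model to pick one allocation by letter."""
    lines = [
        f"Divide {resources} between Alice and Bob.",
        "Choose exactly one option and answer with its letter only.",
    ]
    lines.extend(f"{label}) {option}" for label, option in menu)
    return "\n".join(lines)

menu = [
    ("A", "Alice gets 80, Bob gets 20"),
    ("B", "Alice gets 50, Bob gets 50"),  # equal split
    ("C", "Alice gets 20, Bob gets 80"),
]
prompt = build_menu_prompt("100 dollars", menu)

# response = query_llm(prompt)            # hypothetical model call
# choice = response.strip().upper()[:1]   # parse the selected letter
```

Constraining the output space this way makes responses trivially comparable across models and, per the paper's results, substantially improves fairness alignment for some of them.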
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Social Equity
Fairness Understanding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large Language Models
Social Equity Evaluation
Enhanced Fairness in Decision-making