🤖 AI Summary
This work identifies the amplification of latent social biases in large language models (LLMs) in user-profile-aware scenarios. To address the limitation of conventional benchmarks (e.g., MMLU) in capturing real-world bias, we design a multi-stage evaluation: MMLU-style assessment under predefined personas, scoring of user responses, and generation of salary negotiation recommendations. Results show statistically insignificant bias (p > 0.05) during passive response generation, but significant gender- and region-based bias amplification (p < 0.01) when the model actively judges users or provides prescriptive advice. We introduce the "known-user-profile" paradigm, highlighting how memory-augmented personalization makes biases more implicit and context-dependent. Our findings reveal that current fairness evaluations substantially underestimate LLM bias risks in authentic human-AI interactions. We thus advocate for dynamic, behaviorally and socially grounded evaluation frameworks that assess downstream societal impact rather than static task performance.
📝 Abstract
Modern language models are trained on large amounts of data. These data inevitably include controversial and stereotypical content that carries biases related to gender, origin, age, and other attributes. As a result, models express biased points of view or produce different results depending on an assigned persona or the persona of the user. In this paper, we investigate various proxy measures of bias in large language models (LLMs). We find that evaluating models with pre-prompted personas on a multi-subject benchmark (MMLU) leads to negligible and mostly random differences in scores. However, if we reformulate the task and ask a model to grade the user's answer, the results show clearer signs of bias. Finally, if we ask the model for salary negotiation advice, we observe pronounced bias in the answers. With the recent trend toward LLM assistant memory and personalization, these problems emerge from a different angle: modern LLM users no longer need to describe their persona in the prompt, since the model already knows their socio-demographics.
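The two evaluation stages described above (persona-conditioned MMLU answering vs. grading a user's answer) can be sketched as prompt templates. This is a minimal illustrative sketch, not the paper's actual templates: the persona descriptions, wording, and helper names (`build_mmlu_prompt`, `build_grading_prompt`) are all assumptions.

```python
# Illustrative sketch of persona-conditioned evaluation prompts.
# Persona texts below are hypothetical examples, not the paper's personas.
PERSONAS = {
    "baseline": "",
    "persona_a": "The user is a woman from a small rural town.",
    "persona_b": "The user is a man from a large city.",
}

def build_mmlu_prompt(persona: str, question: str, choices: list) -> str:
    """Stage 1 (passive): answer an MMLU-style question under a persona prefix."""
    header = f"{persona}\n" if persona else ""
    opts = "\n".join(f"{chr(65 + i)}. {c}" for i, c in enumerate(choices))
    return f"{header}Question: {question}\n{opts}\nAnswer with A, B, C, or D."

def build_grading_prompt(persona: str, question: str, user_answer: str) -> str:
    """Stage 2 (active judgment): grade the user's answer, where bias showed up."""
    return (f"{persona}\nThe user answered the question below.\n"
            f"Question: {question}\nUser's answer: {user_answer}\n"
            f"Grade the answer on a scale from 0 to 10.")

# Example usage: the same question is posed under different personas, and
# per-persona scores (or grades) would then be compared for systematic gaps.
prompt = build_mmlu_prompt(PERSONAS["persona_a"],
                           "What is the capital of France?",
                           ["Berlin", "Paris", "Rome", "Madrid"])
```

The point of the comparison is that stage 1 differences were negligible, while stage 2 (and the salary-advice stage) surfaced systematic per-persona gaps.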