🤖 AI Summary
This work addresses the risk that large language models may implicitly accumulate and propagate biases across domains through long-term memory mechanisms, raising significant fairness concerns. To systematically investigate this issue, the authors introduce the Decision-based Implicit Bias (DIB) benchmark and a long-horizon interactive simulation framework to evaluate bias dynamics in mainstream models and memory architectures. Their analysis reveals, for the first time, that implicit bias exhibits non-stationary growth within long-term memory and can transfer across domains. To mitigate this, they propose Dynamic Memory Tagging (DMT), a mechanism that enforces fairness constraints during the memory writing phase. Experimental results demonstrate that DMT effectively suppresses bias accumulation and blocks cross-domain propagation, achieving more persistent and robust debiasing performance compared to static prompting strategies.
📝 Abstract
Long-term memory mechanisms enable Large Language Models (LLMs) to maintain continuity and personalization across extended interaction lifecycles, but they also introduce new and underexplored risks related to fairness. In this work, we study how implicit bias, defined as subtle statistical prejudice, accumulates and propagates within LLMs equipped with long-term memory. To support systematic analysis, we introduce the Decision-based Implicit Bias (DIB) Benchmark, a large-scale dataset comprising 3,776 decision-making scenarios across nine social domains, designed to quantify implicit bias in long-term decision processes. Using a realistic long-horizon simulation framework, we evaluate six state-of-the-art LLMs integrated with three representative memory architectures on DIB and demonstrate that LLMs' implicit bias does not remain static but intensifies over time and propagates across unrelated domains. We further analyze mitigation strategies and show that a static system-level prompting baseline provides limited and short-lived debiasing effects. To address this limitation, we propose Dynamic Memory Tagging (DMT), an agentic intervention that enforces fairness constraints at memory write time. Extensive experimental results show that DMT substantially reduces bias accumulation and effectively curtails cross-domain bias propagation.
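To make the write-time intervention concrete, here is a minimal sketch of how a fairness gate on memory writes might look. All names here (`MemoryStore`, `fairness_tags`, `SENSITIVE_TERMS`, `gated_write`) are illustrative assumptions, not the paper's actual DMT implementation; the idea is only that candidate memories are tagged for sensitive attributes before being committed, so later reads can down-weight or neutralize them.

```python
# Illustrative sketch of write-time fairness tagging, in the spirit of DMT.
# All identifiers are hypothetical, not the authors' implementation.

SENSITIVE_TERMS = {"gender", "race", "age", "religion", "nationality"}


class MemoryStore:
    """Toy long-term memory: a list of tagged entries."""

    def __init__(self):
        self.entries = []

    def write(self, text, tags):
        self.entries.append({"text": text, "tags": sorted(tags)})


def fairness_tags(text):
    """Return the sensitive attributes mentioned in a candidate memory."""
    lowered = text.lower()
    return {term for term in SENSITIVE_TERMS if term in lowered}


def gated_write(store, text):
    """Commit a memory with fairness tags attached at write time,
    so the retrieval side can filter or discount tagged entries."""
    tags = fairness_tags(text)
    store.write(text, tags)
    return tags


store = MemoryStore()
gated_write(store, "User prefers concise answers.")
gated_write(store, "The applicant's gender was noted in the review.")
print([e["tags"] for e in store.entries])  # [[], ['gender']]
```

A real system would tag via an auxiliary LLM call rather than keyword matching, and could also rewrite or refuse biased writes; the point is that the constraint is enforced when memory is written, not only in the system prompt.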