🤖 AI Summary
This work addresses the emergence of collective memory biases—such as the Mandela Effect—in multi-agent large language model systems, which can lead to shared false memories and the propagation of misinformation. The study presents the first formal definition and quantification of this phenomenon within multi-agent settings and introduces MANBENCH, a comprehensive benchmark comprising four task categories and five interaction protocols to systematically evaluate its prevalence and impact. To mitigate these biases, the authors propose an integrated defense mechanism that combines cognitive anchoring, source scrutiny, and alignment optimization, operating synergistically at both the prompting and model layers. Experimental results demonstrate that the proposed approach reduces instances of the Mandela Effect by 74.40% on average, substantially enhancing the reliability of collective memory and the ethical consistency of multi-agent systems.
📝 Abstract
Recent advancements in large language models (LLMs) have significantly enhanced the capabilities of collaborative multi-agent systems, enabling them to address complex challenges. However, the susceptibility of agents within these systems to collective cognitive biases remains an underexplored issue. A compelling example is the Mandela effect, a phenomenon in which groups collectively misremember past events because false details are reinforced through social influence and internalized misinformation. This vulnerability limits our understanding of memory bias in multi-agent systems and raises ethical concerns about the potential spread of misinformation. In this paper, we conduct a comprehensive study of the Mandela effect in LLM-based multi-agent systems, focusing on its existence, contributing factors, and mitigation strategies. We propose MANBENCH, a novel benchmark designed to evaluate agent behaviors across four common task types that are susceptible to the Mandela effect, using five interaction protocols that vary in agent roles and memory timescales. We evaluate agents powered by several LLMs on MANBENCH to quantify the Mandela effect and analyze how different factors affect it. Moreover, we propose mitigation strategies, including prompt-level defenses (e.g., cognitive anchoring and source scrutiny) and a model-level, alignment-based defense, achieving an average 74.40% reduction in the Mandela effect relative to the baseline. Our findings provide valuable insights for developing more resilient and ethically aligned collaborative multi-agent systems.
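To make the prompt-level defenses concrete, the sketch below shows one plausible way to wrap an agent's system prompt with a cognitive-anchoring instruction (hold on to independently verified knowledge) and a source-scrutiny instruction (question claims merely repeated by peers). The prompt wording and the `harden_system_prompt` helper are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of prompt-level defenses against collective memory bias.
# The instruction text below is invented for illustration; the paper's exact
# prompts may differ.

COGNITIVE_ANCHOR = (
    "Before answering, restate what you independently know about the question. "
    "Do not revise verified facts solely because other agents disagree."
)

SOURCE_SCRUTINY = (
    "For every claim made by another agent, assess whether it is supported by a "
    "reliable source. Treat unsupported repetition by multiple agents as weak "
    "evidence, not confirmation."
)

def harden_system_prompt(base_prompt: str) -> str:
    """Prepend both defense instructions to an agent's base system prompt."""
    return "\n\n".join([COGNITIVE_ANCHOR, SOURCE_SCRUTINY, base_prompt])

if __name__ == "__main__":
    hardened = harden_system_prompt("You are agent A in a group discussion.")
    print(hardened)
```

In a multi-agent run, each agent's system prompt would be passed through `harden_system_prompt` before the interaction protocol begins, so every agent carries the anchoring and scrutiny instructions throughout the conversation.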