🤖 AI Summary
This study addresses the risk that large language models (LLMs) may propagate historical revisionist narratives when responding to historical queries, due to the absence of reliable auditing mechanisms. To this end, the authors construct HistoricalMisinfo, a benchmark dataset encompassing 500 contested historical events across 45 countries, each annotated with both factual and revisionist narrative versions. They design 11 realistic prompting scenarios to evaluate model outputs under neutral and诱导 conditions. An innovative LLM-as-a-judge automated evaluation protocol is introduced, integrating structured prompts, reference-aligned scoring, and multi-scenario templates to quantitatively assess models’ sensitivity and robustness to historical revisionism. Experiments reveal that while models generally favor factual responses under neutral prompts, they significantly deviate toward revisionist content when explicitly prompted, exposing a critical lack of effective resistance or self-correction capabilities.
📝 Abstract
Large language models (LLMs) are increasingly used as sources of historical information, motivating the need for scalable audits on contested events and politically charged narratives in settings that mirror real user interactions. We introduce \textsc{\texttt{HistoricalMisinfo}}, a curated dataset of $500$ contested events from $45$ countries, each paired with a factual reference narrative and a documented revisionist reference narrative. To approximate real-world usage, we instantiate each event in $11$ prompt scenarios that reflect common communication settings (e.g., questions, textbooks, social posts, policy briefs). Using an LLM-as-a-judge protocol that compares model outputs to the two references, we evaluate LLMs varying across model architectures in two conditions: (i) neutral user prompts that ask for factually accurate information, and (ii) robustness prompts in which the user explicitly requests the revisionist version of the event. Under neutral prompts, models are generally closer to factual references, though the resulting scores should be interpreted as reference-alignment signals rather than definitive evidence of human-interpretable revisionism. Robustness prompting yields a strong and consistent effect: when the user requests the revisionist narrative, all evaluated models show sharply higher revisionism scores, indicating limited resistance or self-correction. \textsc{\texttt{HistoricalMisinfo}} provides a practical foundation for benchmarking robustness to revisionist framing and for guiding future work on more precise automatic evaluation of contested historical claims to ensure a sustainable integration of AI systems within society.