🤖 AI Summary
Existing safety benchmarks lack systematic evaluation of the vulnerabilities of large language models (LLMs) in socio-political risk scenarios such as political manipulation, disinformation generation, surveillance, and information control. To address this gap, we introduce SocialHarmBench: the first benchmark covering 7 socio-political domains across 34 countries, comprising 585 prompts that span pre-20th-century through 21st-century contexts. It enables the first cross-regional, diachronic assessment of LLMs' socio-political safety. The methodology combines multi-dimensional prompt design with hybrid human-automated evaluation. Results reveal that mainstream open-weight models (e.g., Mistral-7B) reach attack success rates of 97–98% on tasks involving historical revisionism, propaganda, and political manipulation; notably, defenses are weakest against queries tied to Latin America, the U.S., and the U.K., exposing significant biases tied to geography and historical context. SocialHarmBench fills a critical gap in socio-political safety evaluation and provides a reproducible, empirically grounded benchmark to inform the governance of LLMs in high-risk societal contexts.
📝 Abstract
Large language models (LLMs) are increasingly deployed in contexts where their failures can have direct sociopolitical consequences. Yet, existing safety benchmarks rarely test vulnerabilities in domains such as political manipulation, propaganda and disinformation generation, or surveillance and information control. We introduce SocialHarmBench, a dataset of 585 prompts spanning 7 sociopolitical categories and 34 countries, designed to surface where LLMs most acutely fail in politically charged contexts. Our evaluations reveal several shortcomings: open-weight models exhibit high vulnerability to harmful compliance, with Mistral-7B reaching attack success rates as high as 97–98% in domains such as historical revisionism, propaganda, and political manipulation. Moreover, temporal and geographic analyses show that LLMs are most fragile when confronted with 21st-century or pre-20th-century contexts, and when responding to prompts tied to regions such as Latin America, the USA, and the UK. These findings demonstrate that current safeguards fail to generalize to high-stakes sociopolitical settings, exposing systematic biases and raising concerns about the reliability of LLMs in preserving human rights and democratic values. We share the SocialHarmBench benchmark at https://huggingface.co/datasets/psyonp/SocialHarmBench.
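Since the benchmark is hosted on the Hugging Face Hub, it can be pulled directly with the `datasets` library. The sketch below is a minimal example, not an official loading script: the `"train"` split name and the act of indexing into individual prompt records are assumptions, and the actual split and field names should be checked against the dataset card at the URL above.

```python
# Minimal sketch: load SocialHarmBench from the Hugging Face Hub.
# Assumptions: the `datasets` library is installed (pip install datasets) and the
# dataset exposes a "train" split; verify the real split and column names on the
# dataset card at https://huggingface.co/datasets/psyonp/SocialHarmBench.
from datasets import load_dataset

ds = load_dataset("psyonp/SocialHarmBench", split="train")  # split name is an assumption

print(ds)     # row count and column names
print(ds[0])  # first prompt record; fields follow the dataset's own schema
```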