What Large Language Models Do Not Talk About: An Empirical Study of Moderation and Censorship Practices

📅 2025-04-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the censorship behavior of large language models (LLMs) in responses to politically sensitive queries about world leaders. It introduces a two-part framework distinguishing “hard censorship” (e.g., refusal to answer, error messages) from “soft censorship” (e.g., omission or dilution of critical information). Using a standardized prompt set covering the six official UN languages and 14 mainstream LLMs, the authors conduct cross-model, cross-lingual, and cross-national comparative experiments, combining large-scale human annotation with automated classification. Censorship is observed across all models, but the strategies are highly localized: they align with each provider's domestic sociopolitical context rather than showing global consistency. The study releases all data, prompts, annotations, and code, establishing the first reproducible, multilingual benchmark for assessing AI transparency and informing context-sensitive, differentiated governance frameworks.

📝 Abstract
Large Language Models (LLMs) are increasingly deployed as gateways to information, yet their content moderation practices remain underexplored. This work investigates the extent to which LLMs refuse to answer or omit information when prompted on political topics. To do so, we distinguish between hard censorship (i.e., generated refusals, error messages, or canned denial responses) and soft censorship (i.e., selective omission or downplaying of key elements), which we identify in LLMs' responses when asked to provide information on a broad range of political figures. Our analysis covers 14 state-of-the-art models from Western countries, China, and Russia, prompted in all six official United Nations (UN) languages. Our analysis suggests that although censorship is observed across the board, it is predominantly tailored to an LLM provider's domestic audience and typically manifests as either hard censorship or soft censorship (though rarely both concurrently). These findings underscore the need for ideological and geographic diversity among publicly available LLMs, and greater transparency in LLM moderation strategies to facilitate informed user choices. All data are made freely available.
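The hard/soft distinction described in the abstract can be sketched as a simple heuristic: flag outright refusals and error-like replies as hard censorship, and measure soft censorship as the fraction of expected key facts missing from an answer. This is a minimal illustration only; the refusal patterns and key-fact matching below are assumptions for exposition, not the paper's actual annotation protocol or classifier.

```python
import re

# Hypothetical refusal phrases (illustrative, not from the paper).
HARD_REFUSAL_PATTERNS = [
    r"\bI (?:cannot|can't|am unable to) (?:answer|help|discuss)\b",
    r"\bI'm sorry, but\b",
]

def is_hard_censorship(response: str) -> bool:
    """Flag empty/error replies or canned refusals as hard censorship."""
    if not response.strip():  # empty reply or error placeholder
        return True
    return any(re.search(p, response, re.IGNORECASE)
               for p in HARD_REFUSAL_PATTERNS)

def soft_censorship_score(response: str, key_facts: list[str]) -> float:
    """Fraction of expected key facts absent from the response.

    Higher values indicate more omission (softer censorship).
    Uses naive substring matching for illustration.
    """
    if not key_facts:
        return 0.0
    missing = sum(1 for fact in key_facts
                  if fact.lower() not in response.lower())
    return missing / len(key_facts)
```

A usage sketch: a response of "I cannot answer that question." would be flagged as hard censorship, while an answer that mentions only one of two annotated key facts would receive a soft-censorship score of 0.5.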
Problem

Research questions and friction points this paper is trying to address.

Study LLM censorship practices on political topics
Compare hard and soft censorship in LLM responses
Analyze ideological bias in 14 global LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Distinguish hard and soft censorship in LLMs
Analyze 14 models across multiple countries
Promote transparency in moderation strategies