Common Sense vs. Morality: The Curious Case of Narrative Focus Bias in LLMs

📅 2026-03-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the tendency of large language models to overlook embedded commonsense contradictions in moral reasoning. To this end, the authors introduce CoMoral, the first benchmark that integrates moral dilemmas with explicit commonsense conflicts, enabling a systematic evaluation of commonsense awareness across ten prominent models through carefully crafted scenarios. The study identifies and formally defines a novel phenomenon termed "narrative focus bias": models are more likely to detect commonsense violations committed by secondary characters than those committed by the narrator. Experimental results demonstrate that current models generally fail to recognize commonsense inconsistencies within moral contexts without explicit prompting, and their performance is significantly influenced by the narrative perspective. These findings underscore the urgent need to enhance the robustness of commonsense reasoning in language models operating in ethically sensitive domains.

📝 Abstract
Large Language Models (LLMs) are increasingly deployed across diverse real-world applications and user communities. As such, it is crucial that these models remain both morally grounded and knowledge-aware. In this work, we uncover a critical limitation of current LLMs: their tendency to prioritize moral reasoning over commonsense understanding. To investigate this phenomenon, we introduce CoMoral, a novel benchmark dataset containing commonsense contradictions embedded within moral dilemmas. Through extensive evaluation of ten LLMs across different model sizes, we find that existing models consistently struggle to identify such contradictions unless given an explicit signal. Furthermore, we observe a pervasive narrative focus bias, wherein LLMs more readily detect commonsense contradictions when they are attributed to a secondary character rather than to the primary (narrator) character. Our comprehensive analysis underscores the need for enhanced reasoning-aware training to improve the commonsense robustness of large language models.
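The perspective manipulation behind "narrative focus bias" can be sketched as follows. This is a minimal illustrative sketch only: the scenario text, names, and detection rates are invented for illustration and are not taken from the CoMoral dataset or the authors' evaluation pipeline.

```python
# Hypothetical sketch: the same commonsense contradiction (carrying soup
# in a sieve) is attributed either to the narrator ("I") or to a
# secondary character. The template and the name "Dana" are illustrative
# assumptions, not CoMoral items.

TEMPLATE = (
    "{subject} poured the soup into a sieve to carry it across the room, "
    "then {subject2} wondered why the bowl arrived empty. "
    "Was it wrong of {obj} to blame the host?"
)

def make_variant(perspective: str) -> str:
    """Render the scenario from a narrator or secondary-character perspective."""
    if perspective == "narrator":
        return TEMPLATE.format(subject="I", subject2="I", obj="me")
    if perspective == "secondary":
        return TEMPLATE.format(subject="Dana", subject2="she", obj="her")
    raise ValueError(f"unknown perspective: {perspective}")

def detection_gap(detected: dict) -> float:
    """Difference in contradiction-detection rate between perspectives.

    `detected` maps each perspective to the fraction of runs in which a
    model flagged the commonsense violation.
    """
    return detected["secondary"] - detected["narrator"]

# A positive gap would indicate narrative focus bias: the violation is
# noticed more often when someone other than the narrator commits it.
# (Rates below are made-up placeholders.)
print(make_variant("narrator"))
print(make_variant("secondary"))
print(detection_gap({"secondary": 0.75, "narrator": 0.25}))
```

In an actual evaluation, each variant would be sent to a model and a judge would score whether the response mentions the contradiction; the sketch only shows how the paired stimuli and the bias metric relate.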
Problem

Research questions and friction points this paper is trying to address.

commonsense reasoning
moral reasoning
narrative focus bias
large language models
commonsense contradictions
Innovation

Methods, ideas, or system contributions that make the work stand out.

narrative focus bias
commonsense reasoning
moral dilemmas
CoMoral benchmark
reasoning-aware training
Saugata Purkayastha
Department of Language Science and Technology, Universität des Saarlandes
Pranav Kushare
Department of Language Science and Technology, Universität des Saarlandes
Pragya Paramita Pal
Department of Language Science and Technology, Universität des Saarlandes
Sukannya Purkayastha
Technische Universität Darmstadt
Natural Language Processing · Deep Learning · Machine Learning