MORABLES: A Benchmark for Assessing Abstract Moral Reasoning in LLMs with Fables

📅 2025-09-15
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing benchmarks inadequately evaluate large language models' (LLMs) capacity for complex, abstract moral reasoning. Method: We construct a human-validated multiple-choice benchmark grounded in fables and historical literary short stories, incorporating adversarial variants and carefully crafted distractors, together with a standardized evaluation protocol to systematically assess deep comprehension, reasoning stability, and robustness. Contribution/Results: This work introduces a human-verified evaluation framework specifically designed for abstract moral reasoning. It reveals pervasive fragility and self-contradiction in LLMs under adversarial conditions, with the best models refuting their own answers in roughly 20% of cases, and shows that performance gains stem primarily from model scale rather than from improved reasoning mechanisms. Empirical results indicate that current LLMs lack reliable higher-order moral reasoning capabilities.

📝 Abstract
As LLMs excel on standard reading comprehension benchmarks, attention is shifting toward evaluating their capacity for complex abstract reasoning and inference. Literature-based benchmarks, with their rich narrative and moral depth, provide a compelling framework for evaluating such deeper comprehension skills. Here, we present MORABLES, a human-verified benchmark built from fables and short stories drawn from historical literature. The main task is structured as multiple-choice questions targeting moral inference, with carefully crafted distractors that challenge models to go beyond shallow, extractive question answering. To further stress-test model robustness, we introduce adversarial variants designed to surface LLM vulnerabilities and shortcuts due to issues such as data contamination. Our findings show that, while larger models outperform smaller ones, they remain susceptible to adversarial manipulation and often rely on superficial patterns rather than true moral reasoning. This brittleness results in significant self-contradiction, with the best models refuting their own answers in roughly 20% of cases depending on the framing of the moral choice. Interestingly, reasoning-enhanced models fail to bridge this gap, suggesting that scale - not reasoning ability - is the primary driver of performance.
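To make the self-contradiction measurement concrete, below is a minimal sketch, assuming a generic LLM client, of how one might check whether a model refutes its own moral-of-the-fable answer when the question is re-framed. The function names, prompts, and option labels are illustrative assumptions, not the authors' protocol.

```python
from typing import Callable, Dict

# Hypothetical sketch of a consistency check in the spirit of MORABLES' adversarial
# re-framing: ask for the moral as a multiple-choice answer, then ask the model to
# verify its own choice. `ask_model` is an assumed stand-in for any LLM client that
# maps a prompt string to a text completion.

def is_self_contradictory(
    ask_model: Callable[[str], str],
    story: str,
    options: Dict[str, str],
) -> bool:
    """Return True if the model rejects the moral it chose under a second framing."""
    option_text = "\n".join(f"{label}. {text}" for label, text in options.items())

    # Framing 1: standard multiple-choice moral inference.
    choice = ask_model(
        f"{story}\n\nWhich option best states the moral of this fable?\n"
        f"{option_text}\nAnswer with a single letter."
    ).strip().upper()[:1]
    if choice not in options:
        return True  # an unparsable answer counts against consistency here

    # Framing 2: yes/no verification of the moral the model just selected.
    verdict = ask_model(
        f"{story}\n\nIs the following an accurate statement of this fable's moral?\n"
        f'"{options[choice]}"\nAnswer Yes or No.'
    ).strip().lower()
    return verdict.startswith("no")
```

Aggregating this flag over a set of benchmark items would give a self-contradiction rate comparable in spirit to the roughly 20% figure reported in the abstract.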
Problem

Research questions and friction points this paper is trying to address.

Assessing abstract moral reasoning in LLMs
Evaluating model robustness against adversarial manipulation
Testing moral inference beyond shallow question answering
Innovation

Methods, ideas, or system contributions that make the work stand out.

Human-verified benchmark using fables
Multiple-choice questions with distractors
Adversarial variants to test robustness