RULEBREAKERS: Challenging LLMs at the Crossroads between Formal Logic and Human-like Reasoning

📅 2024-10-21
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work identifies a “dehumanized reasoning” problem in large language models (LLMs): over-rigid application of formal logic can produce conclusions that violate human common sense when logical rules and commonsense knowledge conflict. To address this, the authors introduce RULEBREAKERS, the first benchmark dataset targeting “rulebreaker” scenarios (cases where formally valid inference yields conclusions that typical human reasoners would reject given their factual knowledge) together with an evaluation paradigm that jointly measures logical competence and human-aligned plausibility. Methodologically, they construct logic–commonsense conflict instances inspired by cognitive science, and diagnose model behavior through human annotation, attention-distribution analysis, and assessment of world-knowledge utilization. Experiments on seven state-of-the-art LLMs, including GPT-4o, which achieves only mediocre accuracy, show that the flaw is widespread; the analysis suggests that imbalanced attention allocation and poor integration of world knowledge are likely contributing factors. The work establishes a new benchmark and diagnostic framework for trustworthy, human-aligned AI reasoning.

📝 Abstract
Formal logic enables computers to reason in natural language by representing sentences in symbolic forms and applying rules to derive conclusions. However, in what our study characterizes as "rulebreaker" scenarios, this method can lead to conclusions that are typically not inferred or accepted by humans given their common sense and factual knowledge. Inspired by works in cognitive science, we create RULEBREAKERS, the first dataset for rigorously evaluating the ability of large language models (LLMs) to recognize and respond to rulebreakers (versus non-rulebreakers) in a human-like manner. Evaluating seven LLMs, we find that most models, including GPT-4o, achieve mediocre accuracy on RULEBREAKERS and exhibit some tendency to over-rigidly apply logical rules unlike what is expected from typical human reasoners. Further analysis suggests that this apparent failure is potentially associated with the models' poor utilization of their world knowledge and their attention distribution patterns. Whilst revealing a limitation of current LLMs, our study also provides a timely counterbalance to a growing body of recent works that propose methods relying on formal logic to improve LLMs' general reasoning capabilities, highlighting their risk of further increasing divergence between LLMs and human-like reasoning.
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs' ability to handle rulebreaker scenarios
Assessing divergence between logical and human-like reasoning
Identifying limitations in LLMs' world knowledge utilization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Created RULEBREAKERS dataset for LLM evaluation
Assessed LLMs' human-like reasoning with rulebreakers
Analyzed attention patterns affecting logical reasoning
Jason Chan
The University of Sheffield
Robert Gaizauskas
Professor of Computer Science, University of Sheffield
Natural Language Processing, Computational Linguistics
Zhixue Zhao
The University of Sheffield