Investigating the Robustness of Deductive Reasoning with Large Language Models

📅 2025-02-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study systematically evaluates the robustness of large language models (LLMs) in deductive reasoning, focusing on vulnerabilities in the formalization of informal text and in the inference process. Method: We propose the first dedicated evaluation framework for the robustness of LLM-based deductive reasoning, applying two families of perturbations, adversarial noise and counterfactual statements, which jointly generate seven perturbed datasets; impacts are decoupled along three dimensions: formalization syntax, reasoning format, and error-recovery feedback. Contribution/Results: Counterfactual perturbations degrade performance across all methods, whereas adversarial noise primarily impairs automatic formalization. Detailed feedback reduces syntactic errors but does not improve overall reasoning accuracy. These findings expose critical bottlenecks in current LLM-based deductive reasoning, particularly in handling counterfactuals and recovering from formalization failures, and provide empirical grounding and methodological guidance for developing robust, trustworthy reasoning systems.
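
To make the two perturbation families concrete, the sketch below applies an adversarial-noise perturbation (an irrelevant distractor premise) and a counterfactual perturbation (a negated commonsense fact) to a toy deduction instance. This is a minimal illustration assuming a simple premises/question/label format; the function names and example premises are illustrative and are not the paper's dataset-construction code.

```python
# Illustrative sketch (not the paper's code): the two perturbation families
# applied to a toy deduction instance.

def add_adversarial_noise(premises: list[str]) -> list[str]:
    """Adversarial noise: append an irrelevant distractor premise that
    should not change the entailed conclusion."""
    distractor = "All birds in the garden are sparrows."  # unrelated to the query
    return premises + [distractor]

def make_counterfactual(premises: list[str]) -> list[str]:
    """Counterfactual statement: replace a commonsense fact with its negation,
    so the correct answer must follow the stated (false) premise."""
    return ["No metal is conductive." if p == "All metals are conductive." else p
            for p in premises]

original = {
    "premises": ["All metals are conductive.", "Copper is a metal."],
    "question": "Is copper conductive?",
    "label": "True",
}

noisy = {**original, "premises": add_adversarial_noise(original["premises"])}
counterfactual = {**original,
                  "premises": make_counterfactual(original["premises"]),
                  "label": "False"}  # the answer flips under the counterfactual premise

print(noisy["premises"])
print(counterfactual["premises"], counterfactual["label"])
```

A robust reasoner should ignore the distractor in the noisy instance and follow the stated premise in the counterfactual one, even when it contradicts world knowledge.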

📝 Abstract
Large Language Models (LLMs) have been shown to achieve impressive results on many reasoning-based Natural Language Processing (NLP) tasks, suggesting a degree of deductive reasoning capability. However, it remains unclear to what extent LLMs, using either informal or autoformalisation methods, are robust on logical deduction tasks. Moreover, while many LLM-based deduction methods have been proposed, there is no systematic study analysing the impact of their design components. Addressing these two challenges, we present the first study of the robustness of LLM-based deductive reasoning methods. We devise a framework with two families of perturbations, adversarial noise and counterfactual statements, which jointly generate seven perturbed datasets. We organize the landscape of LLM reasoners according to their reasoning format, formalisation syntax, and feedback for error recovery. The results show that adversarial noise affects autoformalisation, while counterfactual statements influence all approaches. Detailed feedback does not improve overall accuracy despite reducing syntax errors, pointing to the difficulty LLM-based methods have in self-correcting effectively.
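
The abstract organizes LLM reasoners along three design axes: reasoning format, formalisation syntax, and error-recovery feedback. The sketch below encodes that taxonomy as a small configuration object; the specific enum members (e.g. first-order logic vs. logic-programming syntax) are assumptions for illustration, not the paper's exact categories.

```python
# Illustrative sketch of the three design axes used to organize LLM-based
# deduction methods; the enum members are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class ReasoningFormat(Enum):
    INFORMAL_COT = "natural-language chain of thought"
    AUTOFORMALISATION = "translate premises to a formal language, then call a solver"

class FormalisationSyntax(Enum):
    FIRST_ORDER_LOGIC = "first-order logic"
    LOGIC_PROGRAM = "logic-programming syntax"

class Feedback(Enum):
    NONE = "no error feedback"
    SYNTAX_ONLY = "solver syntax errors reported back"
    DETAILED = "detailed error messages fed back for self-correction"

@dataclass
class ReasonerConfig:
    reasoning_format: ReasoningFormat
    syntax: Optional[FormalisationSyntax]  # only meaningful for autoformalisation
    feedback: Feedback

# Example: an autoformalisation-based reasoner with detailed error feedback,
# the setting where the paper reports fewer syntax errors but no overall gain.
config = ReasonerConfig(ReasoningFormat.AUTOFORMALISATION,
                        FormalisationSyntax.FIRST_ORDER_LOGIC,
                        Feedback.DETAILED)
print(config)
```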
Problem

Research questions and friction points this paper is trying to address.

Assess robustness of LLMs in deductive reasoning
Analyze impact of design components in LLM methods
Evaluate effectiveness of feedback for error correction
Innovation

Methods, ideas, or system contributions that make the work stand out.

First dedicated robustness evaluation framework for LLM-based deductive reasoning
Two perturbation families: adversarial noise and counterfactual statements (seven perturbed datasets)
Systematic study of design components: reasoning format, formalisation syntax, and error-recovery feedback