Measuring Reasoning in LLMs: a New Dialectical Angle

📅 2025-10-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing reasoning evaluation methods overemphasize answer correctness while neglecting the dynamic, dialectical nature and epistemic depth of reasoning processes. Method: We propose SIEV (Synthesis of Ideas through Evaluation and Validation), a structured assessment framework grounded in philosophical dialectics, which models reasoning as a three-stage process—thesis, antithesis, and synthesis—to evaluate idea interaction, conflict resolution, and higher-order conceptual integration. SIEV enables fine-grained, process-oriented evaluation beyond final outputs and is integrated with established benchmarks (e.g., GSM, MMLU). Contribution/Results: Experiments reveal that even state-of-the-art models exhibiting saturation on conventional benchmarks—such as GPT-5-chat—suffer >40-point drops in SIEV scores, exposing critical deficiencies in deep reasoning. This work pioneers the systematic application of dialectical methodology to LLM reasoning evaluation, establishing a novel paradigm for identifying “correct yet unreliable” reasoning—where superficial accuracy masks flawed inferential mechanisms.

📝 Abstract
What does it truly mean for a language model to "reason"? Most current evaluations and benchmarks reward models' correct standalone answers, but correctness alone reveals little about the process that produced them. In this work, we explore a different perspective: reasoning is not a static chain of steps, but a dynamic trajectory where ideas interact, clash, and evolve into deeper insights. To capture this dynamic, we draw on a well-established philosophical tradition, dialectics, where reasoning unfolds through thesis, antithesis, and synthesis. Building on this, we present SIEV, a structured framework that evaluates LLM reasoning through dialectics. Unlike conventional evaluations, SIEV assesses not only the conclusion a model reaches, but how it gets there: its ability to resolve tension, integrate distinct ideas, and synthesize higher-order reasoning. This lens uncovers significant reasoning gaps in state-of-the-art models even under saturated benchmarks like GSM and MMLU. For instance, GPT-5-chat, a recent model, loses over 40 points (out of 100) when evaluated with SIEV on GSM. Our findings highlight that adopting a process-oriented, philosophically grounded approach enables a deeper, more rigorous, and more discriminative assessment of LLM reasoning.
Problem

Research questions and friction points this paper is trying to address.

Evaluating reasoning in LLMs beyond correctness
Assessing dynamic idea interaction and synthesis
Identifying reasoning gaps in state-of-the-art models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluates LLM reasoning through dialectical framework
Assesses reasoning process beyond final answer correctness
Uses thesis-antithesis-synthesis for dynamic evaluation
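The thesis–antithesis–synthesis structure described above can be pictured as a staged scoring pipeline. The sketch below is purely illustrative and is not the paper's actual SIEV implementation: the `DialecticalTrace` type, the `score_trace` function, and the stage weights are all hypothetical assumptions chosen to show how a process-oriented score might weight synthesis over the initial claim.

```python
from dataclasses import dataclass


@dataclass
class DialecticalTrace:
    """Hypothetical record of one dialectical reasoning episode."""
    thesis: str      # the model's initial position on the problem
    antithesis: str  # the challenge or counter-argument it confronts
    synthesis: str   # the integrated resolution it produces


def score_trace(stage_scores: dict[str, float]) -> float:
    """Aggregate per-stage quality scores (each in [0, 1]) into a 0-100 total.

    The weights are illustrative only: they encode the intuition that
    synthesizing higher-order reasoning matters more than stating a
    correct standalone thesis.
    """
    weights = {"thesis": 0.2, "antithesis": 0.3, "synthesis": 0.5}
    total = sum(weights[stage] * stage_scores[stage] for stage in weights)
    return round(100 * total, 1)
```

Under this toy weighting, a model that states a sound thesis and engages the antithesis but fails to synthesize would score at most 50 out of 100, which mirrors the paper's point that correct-looking outputs can mask weak inferential mechanisms.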