ReTraceQA: Evaluating Reasoning Traces of Small Language Models in Commonsense Question Answering

📅 2025-10-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing commonsense reasoning evaluation relies almost exclusively on final-answer accuracy, neglecting the quality of the reasoning process. This leads to substantial overestimation (14–24% of instances) of small language models' (SLMs) reasoning capabilities, since many correct answers stem from flawed reasoning. Method: ReTraceQA is a process-level reasoning quality benchmark tailored to SLMs. It comprises an expert-annotated dataset of reasoning traces and employs strong large language models as automated judges to perform fine-grained validity assessment of reasoning chains. Contribution/Results: Experiments across multiple SLMs and datasets show that reasoning-aware evaluation lowers SLM scores by up to 25%, uncovering reasoning deficiencies masked by answer-only metrics. The work exposes limitations of conventional answer-centric evaluation and proposes a scalable standard for more trustworthy commonsense reasoning assessment.

📝 Abstract
While Small Language Models (SLMs) have demonstrated promising performance on an increasingly wide array of commonsense reasoning benchmarks, current evaluation practices rely almost exclusively on the accuracy of their final answers, neglecting the validity of the reasoning processes that lead to those answers. To address this issue, we introduce ReTraceQA, a novel benchmark that introduces process-level evaluation for commonsense reasoning tasks. Our expert-annotated dataset reveals that in a substantial portion of instances (14-24%), SLMs provide correct final answers despite flawed reasoning processes, suggesting that the capabilities of SLMs are often overestimated by evaluation metrics that focus only on comparing the final answer with the ground truth. Indeed, we show that when employing strong Large Language Models (LLMs) as automated judges for reasoning-aware evaluation rather than answer-only metrics, SLM performance drops significantly across all models and datasets, with scores decreasing by up to 25%.
Problem

Research questions and friction points this paper is trying to address.

Evaluating the reasoning traces of small language models in commonsense question answering
Addressing the overestimation of SLM capabilities through process-level evaluation
Developing a benchmark that assesses reasoning validity beyond final-answer accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

ReTraceQA benchmark for process-level evaluation
Expert-annotated dataset revealing flawed reasoning processes
Using strong LLMs as automated judges for fine-grained reasoning assessment
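The gap the paper measures can be illustrated with a minimal sketch: an item is credited under answer-only scoring if its final answer matches the gold label, while reasoning-aware scoring additionally requires a judge to deem the trace valid. The `stub_judge` below is a hypothetical stand-in for the paper's LLM judge (not the authors' actual prompt or model), included only to make the scoring difference concrete.

```python
def answer_only_accuracy(items):
    """Fraction of items whose final answer matches the gold label."""
    return sum(it["answer"] == it["gold"] for it in items) / len(items)

def reasoning_aware_accuracy(items, judge):
    """Credit an item only if the answer is correct AND the judge
    deems the reasoning trace valid (process-level evaluation)."""
    return sum(
        it["answer"] == it["gold"] and judge(it["trace"])
        for it in items
    ) / len(items)

# Hypothetical stand-in for an LLM judge: here a trivial rule.
def stub_judge(trace):
    return "flawed" not in trace

items = [
    {"answer": "B", "gold": "B", "trace": "valid chain ..."},
    {"answer": "B", "gold": "B", "trace": "flawed chain ..."},  # right answer, bad reasoning
    {"answer": "A", "gold": "C", "trace": "valid chain ..."},
]

print(answer_only_accuracy(items))                  # 2/3
print(reasoning_aware_accuracy(items, stub_judge))  # 1/3
```

Under answer-only scoring the second item counts as a success; under reasoning-aware scoring it does not, which is exactly the kind of inflated credit the benchmark is designed to expose.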