Measuring Reasoning Trace Legibility: Can Those Who Understand Teach?

📅 2026-03-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the lack of effective evaluation for the legibility (i.e., usability by other models or humans) of the reasoning traces generated by current reasoning language models. It proposes "transfer utility" as a novel legibility metric, which quantifies how effectively a reasoning trace helps a weaker, non-reasoning model solve the task; the study applies it to 90k traces produced by 12 models. The results reveal an inverse relationship between model performance and trace legibility, and a tension between efficiency-based measures of legibility (such as trace length) and transfer utility that establishes a legibility Pareto frontier. The work further shows that prevailing reward-model training paradigms do not intrinsically reward legibility, and that legibility is a highly task- and audience-dependent goal, offering new insights for designing reasoning scaffolds in multi-agent collaborative systems.

📝 Abstract
Language models are increasingly being trained to "reason" before answering users' queries, outputting hundreds or even thousands of tokens worth of deliberation before their final answer. While the main intention of reasoning is to improve models' ability to arrive at a correct answer, we argue that these models should be assessed for the legibility of their reasoning traces in addition to the correctness of their final answers. In this paper, we evaluate 90k traces from 12 Reasoning Language Models (RLMs) for the quality of their reasoning traces. We introduce the concept of transfer utility, which assesses how useful an RLM's reasoning traces are for guiding a weaker, non-reasoning model toward arriving at the correct answer. We find that the reasoning traces of the highest-performing models rank among the lowest for legibility. Furthermore, we uncover tensions between efficiency-based measurements of legibility (such as trace length) and transfer utility. These tensions establish a legibility Pareto frontier, and we demonstrate that an RLM's ability to output highly legible traces can be a task- and audience-dependent goal. Crucially, we find that reward models used to train RLMs do not intrinsically reward legibility. Together, these metrics and the findings they surface chart a path towards scaffolding reasoning traces for a multi-agent future.
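The abstract's core metric, transfer utility, can be read as an accuracy uplift: how much better a weaker, non-reasoning model does when conditioned on a stronger model's trace versus answering unaided. A minimal illustrative sketch of that uplift computation is below; the function name and the toy data are assumptions for illustration, not the paper's implementation.

```python
def transfer_utility(correct_with_trace, correct_without_trace):
    """Mean accuracy uplift for a weak model when given reasoning traces.

    Both arguments are per-task correctness indicators (0/1) for the same
    task set: once answering with a stronger model's trace in context,
    once answering unaided. This is an illustrative stand-in for the
    paper's metric, not its actual scoring pipeline.
    """
    assert len(correct_with_trace) == len(correct_without_trace)
    n = len(correct_with_trace)
    acc_with = sum(correct_with_trace) / n
    acc_without = sum(correct_without_trace) / n
    return acc_with - acc_without

# Toy example: the weak model solves 7/10 tasks with traces, 4/10 without.
with_trace = [1, 1, 1, 0, 1, 1, 0, 1, 1, 0]
without_trace = [1, 0, 0, 0, 1, 0, 1, 0, 1, 0]
print(round(transfer_utility(with_trace, without_trace), 2))  # → 0.3
```

Under this reading, a highly legible trace is one that transfers: it raises the weak model's accuracy even though the weak model did not produce the reasoning itself.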
Problem

Research questions and friction points this paper is trying to address.

reasoning trace legibility
transfer utility
reasoning language models
multi-agent reasoning
reward modeling
Innovation

Methods, ideas, or system contributions that make the work stand out.

reasoning trace legibility
transfer utility
reasoning language models
Pareto frontier
reward modeling
Dani Roytburg
Machine Learning Department, Carnegie Mellon University, Pittsburgh, United States
Shreya Sridhar
Machine Learning Department, Carnegie Mellon University, Pittsburgh, United States
Daphne Ippolito
Carnegie Mellon University
natural language processing