Balancing Faithfulness and Performance in Reasoning via Multi-Listener Soft Execution

📅 2026-02-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the misalignment between conventional chain-of-thought (CoT) reasoning and the actual computation of large language models, as well as the common trade-off between interpretability and task performance. To bridge this gap, the authors propose REMUL, a multi-listener reinforcement learning framework that guides a speaker model to generate reasoning traces executable by multiple listener models. Combined with masked supervised fine-tuning, REMUL jointly optimizes for both faithfulness and accuracy by explicitly formulating reasoning faithfulness as an optimizable objective. Evaluated on benchmarks such as BIG-Bench Extra Hard, the approach simultaneously improves task accuracy and three faithfulness metrics (hint attribution, early answering area over the curve, and mistake injection AOC), yielding CoT rationales that are shorter, more direct, and more reliable.
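One of the cited faithfulness metrics, early answering area over the curve, can be illustrated with a minimal sketch. The idea: truncate the CoT at increasing fractions of its length, record the answer the model gives at each truncation point, and measure how often it already matches the final answer; the *area over* that match curve is high when the answer genuinely depends on the full reasoning. The function below is a hypothetical illustration of this protocol, not the paper's implementation (function name, inputs, and the trapezoidal integration are all assumptions).

```python
def early_answering_aoc(answers_at_truncation, final_answer):
    """Sketch of early-answering area over the curve (AOC).

    answers_at_truncation: the model's answers when its CoT is cut at
    increasing fractions of its length (e.g. 0%, 25%, ..., 100%).
    A faithful CoT should not commit to the final answer early, so a
    higher area over the match curve indicates higher faithfulness.
    (Hypothetical interface, not the paper's code.)
    """
    n = len(answers_at_truncation)
    # 1.0 where the truncated model already produces the final answer
    match = [1.0 if a == final_answer else 0.0 for a in answers_at_truncation]
    # trapezoidal area *under* the match curve on [0, 1]
    auc = sum((match[i] + match[i + 1]) / 2 for i in range(n - 1)) / (n - 1)
    return 1.0 - auc  # area over the curve
```

For example, a CoT whose answer only settles after the third of five truncation points scores higher (more faithful) than one that states the final answer from the start, which scores 0.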

📝 Abstract
Chain-of-thought (CoT) reasoning sometimes fails to faithfully reflect the true computation of a large language model (LLM), hampering its utility in explaining how LLMs arrive at their answers. Moreover, optimizing for faithfulness and interpretability in reasoning often degrades task performance. To address this tradeoff and improve CoT faithfulness, we propose Reasoning Execution by Multiple Listeners (REMUL), a multi-party reinforcement learning approach. REMUL builds on the hypothesis that reasoning traces which other parties can follow will be more faithful. A speaker model generates a reasoning trace, which is truncated and passed to a pool of listener models who "execute" the trace, continuing the trace to an answer. Speakers are rewarded for producing reasoning that is clear to listeners, with additional correctness regularization via masked supervised finetuning to counter the tradeoff between faithfulness and performance. On multiple reasoning benchmarks (BIG-Bench Extra Hard, MuSR, ZebraLogicBench, and FOLIO), REMUL consistently and substantially improves three measures of faithfulness -- hint attribution, early answering area over the curve (AOC), and mistake injection AOC -- while also improving accuracy. Our analysis finds that these gains are robust across training domains, translate to legibility gains, and are associated with shorter and more direct CoTs.
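The speaker-listener loop described in the abstract (truncate the speaker's trace, have a pool of listeners "execute" the prefix to an answer, reward the speaker when listeners succeed) can be sketched as a reward function. This is a hedged illustration only: the `listener(question, prefix)` callable, the newline-delimited trace format, and the truncation fraction are assumptions, not the paper's actual interfaces.

```python
def listener_reward(speaker_trace, listeners, question, gold_answer,
                    truncate_frac=0.7):
    """Sketch of a REMUL-style multi-listener reward (hypothetical API).

    The speaker's reasoning trace is truncated to a prefix, and each
    listener model continues that prefix to an answer. The speaker is
    rewarded for the fraction of listeners the prefix leads to the
    correct answer, i.e. for reasoning that others can follow.
    """
    steps = speaker_trace.split("\n")
    cut = max(1, int(len(steps) * truncate_frac))  # keep a prefix of the trace
    prefix = "\n".join(steps[:cut])
    # listener(question, prefix) -> answer string (assumed interface)
    correct = sum(1 for listener in listeners
                  if listener(question, prefix) == gold_answer)
    return correct / len(listeners)
```

In the paper this listener-based reward is combined with correctness regularization via masked supervised finetuning; the sketch above covers only the faithfulness signal.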
Problem

Research questions and friction points this paper is trying to address.

faithfulness
chain-of-thought reasoning
large language models
interpretability
performance tradeoff
Innovation

Methods, ideas, or system contributions that make the work stand out.

Chain-of-Thought
Faithfulness
Multi-Listener Reinforcement Learning
Reasoning Execution
Interpretability