Local Coherence or Global Validity? Investigating RLVR Traces in Math Domains

📅 2025-10-20
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing Reinforcement Learning with Verifiable Rewards (RLVR) methods for post-training large language models (LLMs) on mathematical reasoning rely solely on terminal rewards (e.g., final-answer correctness or Pass@K), leaving intermediate tokens without a direct optimization signal, even as they claim improvements in chain-of-thought (CoT) quality. Method: This work investigates the effect of RL post-training on intermediate reasoning steps, proposing a First-Order Logic (FOL)-based "trace coherence" measure that disentangles local coherence (absence of step-level errors) from global logical validity. Experiments use the GRPO algorithm on Qwen-2.5-0.5B, integrating FOL formal verification to systematically analyze reasoning traces on GSM8K. Contribution/Results: RL post-training improves trace coherence overall, with the largest gains on problems the base model fails but the RL model solves; however, improved local coherence does not ensure global validity or final-answer correctness. These findings expose a critical limitation of current RLVR approaches, namely their inability to guarantee end-to-end logical soundness, and provide empirical grounds for designing fine-grained, stepwise reward signals.

📝 Abstract
Reinforcement Learning with Verifiable Rewards (RLVR)-based post-training of Large Language Models (LLMs) has been shown to improve accuracy on reasoning tasks and continues to attract significant attention. Existing RLVR methods, however, typically treat all tokens uniformly without accounting for token-level advantages. These methods primarily evaluate performance based on final answer correctness or Pass@K accuracy, and yet make claims about RL post-training leading to improved reasoning traces. This motivates our investigation into the effect of RL post-training on intermediate tokens which are not directly incentivized. To study this, we design an experimental setup using the GRPO algorithm with Qwen-2.5-0.5B model on the GSM8K dataset. We introduce trace coherence, a First-Order Logic (FOL)-based measure to capture the consistency of reasoning steps by identifying errors in the traces. We distinguish between trace validity and trace coherence, noting that the former implies logical soundness while the latter measures local coherence via lack of errors. Our results show that RL post-training overall improves trace coherence with the most significant gains on problems where the base model fails but the RL model succeeds. Surprisingly, RL enhances local coherence without necessarily producing valid or correct solutions. This highlights a crucial distinction: improved local coherence in reasoning steps does not guarantee final answer correctness. We argue that claims of improved reasoning via RL must be examined with care, as these may be based on improved trace coherence, which may not translate into fully valid mathematical proofs.
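The abstract's distinction between trace coherence (local, per-step error-freeness) and trace validity (global logical soundness plus a correct answer) can be sketched as below. The `Step` fields and helper names are hypothetical stand-ins for the paper's FOL-based checker, which actually produces these annotations:

```python
from dataclasses import dataclass

@dataclass
class Step:
    """One reasoning step, annotated by an external FOL checker (hypothetical)."""
    text: str
    has_error: bool   # local slip (arithmetic or logical) detected in this step
    entailed: bool    # step follows from the premises and prior steps

def trace_coherence(steps: list[Step]) -> float:
    """Local measure: fraction of steps free of detected errors."""
    if not steps:
        return 0.0
    return sum(not s.has_error for s in steps) / len(steps)

def trace_valid(steps: list[Step], answer_correct: bool) -> bool:
    """Global measure: every step entailed and error-free, and the final answer correct."""
    return answer_correct and all(s.entailed and not s.has_error for s in steps)

trace = [
    Step("2 apples + 3 apples = 5 apples", has_error=False, entailed=True),
    Step("5 apples * $2 = $12", has_error=True, entailed=False),  # arithmetic slip
]
print(trace_coherence(trace))      # 0.5: one of two steps is locally error-free
print(trace_valid(trace, False))   # False: the trace is not globally valid
```

A trace can thus score well on coherence while still failing validity, which is exactly the gap the paper highlights in RL-trained models.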
Problem

Research questions and friction points this paper is trying to address.

Investigating RLVR's effect on intermediate reasoning tokens in math domains
Measuring trace coherence versus validity in RL-trained LLM reasoning steps
Analyzing whether improved local coherence guarantees correct final answers
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces trace coherence metric using First-Order Logic
Uses GRPO algorithm for RL post-training on Qwen model
Distinguishes local coherence from global solution validity
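The GRPO setup mentioned above scores each sampled completion only with a terminal, verifiable reward and normalizes it within its sampling group. A minimal sketch of that group-relative advantage computation, assuming the standard mean/std normalization (function name and reward values are illustrative):

```python
import statistics

def grpo_advantages(rewards: list[float], eps: float = 1e-6) -> list[float]:
    """Group-relative advantages: normalize each completion's terminal
    reward by the mean and (population) std of its sampling group."""
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)
    return [(r - mean) / (std + eps) for r in rewards]

# Verifiable terminal reward on GSM8K-style problems:
# 1.0 if the final answer matches the reference, else 0.0.
rewards = [1.0, 0.0, 0.0, 1.0]
advantages = grpo_advantages(rewards)
```

Note that every token in a completion inherits the same advantage; this is the "tokens treated uniformly" property the paper probes, since no intermediate step receives its own signal.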