Shorter, but Still Trustworthy? An Empirical Study of Chain-of-Thought Compression

📅 2026-04-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses a critical yet overlooked trade-off in chain-of-thought (CoT) compression: existing methods improve reasoning efficiency but often compromise model trustworthiness (safety, hallucination resistance, and multilingual robustness), a degradation that evaluations focused solely on accuracy and token savings fail to surface. We systematically characterize this efficiency-trustworthiness tension, introduce a unified benchmark covering the three trust dimensions, and propose a normalized efficiency metric for fair comparison across compression techniques. To mitigate trust degradation without sacrificing compression gains, we further design an alignment-aware variant of Direct Preference Optimization (DPO). Experiments demonstrate that our approach reduces CoT length by 19.3% on average across reasoning benchmarks while substantially alleviating trustworthiness loss.
📝 Abstract
Long chain-of-thought (Long-CoT) reasoning models have motivated a growing body of work on compressing reasoning traces to reduce inference cost, yet existing evaluations focus almost exclusively on task accuracy and token savings. Trustworthiness properties, whether acquired or reinforced through post-training, are encoded in the same parameter space that compression modifies. This means preserving accuracy does not, a priori, guarantee preserving trustworthiness. We conduct the first systematic empirical study of how CoT compression affects model trustworthiness, evaluating multiple models of different scales along three dimensions: safety, hallucination resistance, and multilingual robustness. Under controlled comparisons, we find that CoT compression frequently introduces trustworthiness regressions and that different methods exhibit markedly different degradation profiles across dimensions. To enable fair comparison across bases, we propose a normalized efficiency score for each dimension that reveals how naïve scalar metrics can obscure trustworthiness trade-offs. As an existence proof, we further introduce an alignment-aware DPO variant that reduces CoT length by 19.3% on reasoning benchmarks with substantially smaller trustworthiness loss. Our findings suggest that CoT compression should be optimized not only for efficiency but also for trustworthiness, treating both as equally important design constraints.
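The abstract does not give the exact form of the per-dimension normalized efficiency score. As a hedged illustration only (all names hypothetical, not the paper's published formula), one plausible formulation scores trust retained per fraction of tokens saved, relative to the uncompressed base model so that different bases are comparable:

```python
def normalized_efficiency(base_trust: float, comp_trust: float,
                          base_tokens: float, comp_tokens: float) -> float:
    """Hypothetical per-dimension efficiency score (illustrative sketch).

    base_trust / comp_trust: trust-dimension score (e.g. safety pass rate)
    before and after compression; base_tokens / comp_tokens: mean CoT
    length before and after. Higher is better: the score grows with the
    fraction of tokens saved and shrinks if trust degrades.
    """
    trust_retention = comp_trust / base_trust      # 1.0 means no trust loss
    token_ratio = comp_tokens / base_tokens        # < 1.0 means shorter CoT
    saved = 1.0 - token_ratio                      # fraction of tokens saved
    if saved <= 0:
        return 0.0                                 # no compression achieved
    return trust_retention * saved
```

A scalar like mean accuracy would hide the `trust_retention` factor entirely, which is the kind of obscuring effect the abstract attributes to naïve metrics.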
Problem

Research questions and friction points this paper is trying to address.

Chain-of-Thought Compression
Trustworthiness
Safety
Hallucination Resistance
Multilingual Robustness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Chain-of-Thought Compression
Trustworthiness
Alignment-aware DPO
Hallucination Resistance
Normalized Efficiency Score
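For context on the DPO-based contribution: the page does not specify how the alignment-aware variant modifies the objective, but it builds on the standard DPO loss (Rafailov et al., 2023), sketched below for a single preference pair. The comment about the variant is an assumption, not the paper's stated method:

```python
import math

def dpo_loss(logp_w: float, logp_l: float,
             ref_logp_w: float, ref_logp_l: float,
             beta: float = 0.1) -> float:
    """Standard DPO loss for one (preferred, rejected) response pair.

    logp_* are the policy's summed log-probs of the winning/losing
    responses; ref_logp_* are the frozen reference model's. The paper's
    alignment-aware variant is unspecified; one natural reading is that
    it builds preference pairs where the winner is both shorter AND
    trust-preserving, while reusing an objective of this form.
    """
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log(sigmoid(margin))
```

With a zero margin the loss is log 2, and it decreases as the policy prefers the winning response more strongly than the reference does.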