🤖 AI Summary
This work investigates whether chain-of-thought (CoT) explanations generated by large reasoning models generalize across models, i.e., whether an explanation produced by one model induces consistent behavior in others. We present the first systematic evaluation of this form of cross-model behavioral consistency, propose a sentence-level ensembling strategy to enhance it, and analyze its relationship to human preferences and reinforcement-learning post-training. Our experiments show that CoT explanations generally improve behavioral alignment among diverse models, and that this alignment correlates positively with both human preference ratings and the effectiveness of reinforcement-learning-based post-training. The proposed ensembling method further strengthens the cross-model generalization of CoT explanations.
📝 Abstract
Large reasoning models (LRMs) produce a textual chain of thought (CoT) in the process of solving a problem, which serves as a potentially powerful tool for understanding the problem by surfacing a human-readable, natural-language explanation. However, it is unclear whether these explanations generalize, i.e., whether they capture general patterns about the underlying problem rather than patterns that are idiosyncratic to the LRM. This is a crucial question when using LRMs to understand or discover new concepts, e.g., in AI for science. We study this question by evaluating a specific notion of generalizability: whether explanations produced by one LRM induce the same behavior when given to other LRMs. We find that CoT explanations often exhibit this form of generalization (i.e., they increase consistency between LRMs) and that this increased generalization correlates with human preference rankings and with post-training via reinforcement learning. We further analyze the conditions under which explanations yield consistent answers and propose a straightforward, sentence-level ensembling strategy that improves consistency. Taken together, these results prescribe caution when using LRM explanations to derive new insights and outline a framework for characterizing the generalization of LRM explanations.
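The evaluation described above can be sketched concretely. The snippet below is a minimal, hypothetical illustration (not the authors' implementation): `cross_model_consistency` measures pairwise answer agreement when several models are conditioned on the same CoT explanation, and `sentence_level_ensemble` shows one simple way to combine explanations sentence by sentence via majority vote. The model-call interface is stubbed with plain callables so the sketch is runnable.

```python
from collections import Counter

def cross_model_consistency(models, question, explanation):
    """Fraction of model pairs that return the same answer when given the
    same question plus a CoT explanation produced elsewhere. Each `model`
    is a stand-in callable (question, explanation) -> answer; in practice
    this would be an LLM API call."""
    answers = [m(question, explanation) for m in models]
    pairs = len(answers) * (len(answers) - 1) // 2
    agreements = sum(
        1
        for i in range(len(answers))
        for j in range(i + 1, len(answers))
        if answers[i] == answers[j]
    )
    return agreements / pairs

def sentence_level_ensemble(explanations):
    """Toy sentence-level ensemble: at each sentence position, keep the
    sentence that appears most often across the candidate explanations
    (majority vote over aligned positions)."""
    split = [e.split(". ") for e in explanations]
    length = min(len(s) for s in split)
    return ". ".join(
        Counter(s[k] for s in split).most_common(1)[0][0]
        for k in range(length)
    )
```

With three stub models where two agree, `cross_model_consistency` returns 1/3 (one agreeing pair out of three); the ensemble simply recovers the majority sentence at each position. The actual paper's procedure and prompt format are not specified here, so the alignment-by-position and the agreement metric are illustrative assumptions.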