🤖 AI Summary
This work addresses the limited controllability and interpretability of chain-of-thought (CoT) reasoning in large language models. Methodologically, it introduces the first bottom-up framework for analyzing and steering CoTs: it leverages unsupervised semantic embeddings and hierarchical clustering to automatically discover and characterize diverse reasoning patterns, generates interpretable, contrastive evaluation rubrics, and integrates human-in-the-loop assessment with format-sensitivity analysis to enable prior-free prediction of reasoning paths and controllable guidance. Key contributions include: (1) the first empirical demonstration that training-data formatting critically shapes CoT behavior; (2) overcoming the limitations of predefined strategy taxonomies, substantially improving both the interpretability and the coverage of reasoning analysis; and (3) improved accuracy in reasoning-strategy prediction across multiple tasks, along with enhanced model reasoning performance via format-aware prompting.
📝 Abstract
Long chain-of-thought (CoT) is an essential ingredient in the effective use of modern large language models, but our understanding of the reasoning strategies underlying these capabilities remains limited. While some prior works have attempted to categorize CoTs using predefined strategy types, such approaches are constrained by human intuition and fail to capture the full diversity of model behaviors. In this work, we introduce the CoT Encyclopedia, a bottom-up framework for analyzing and steering model reasoning. Our method automatically extracts diverse reasoning criteria from model-generated CoTs, embeds them into a semantic space, clusters them into representative categories, and derives contrastive rubrics to interpret reasoning behavior. Human evaluations show that this framework produces more interpretable and comprehensive analyses than existing methods. Moreover, we demonstrate that this understanding enables performance gains: we can predict which strategy a model is likely to use and guide it toward more effective alternatives. Finally, we provide practical insights, for example that training data format (e.g., free-form vs. multiple-choice) has a far greater impact on reasoning behavior than data domain, underscoring the importance of format-aware model design.
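The embed-then-cluster stage of the pipeline can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the criterion strings are invented, TF-IDF stands in for the semantic embedding model the framework would use, and agglomerative clustering stands in for its hierarchical clustering step.

```python
# Minimal sketch of the bottom-up pipeline: embed extracted reasoning
# criteria, then hierarchically cluster them into representative categories.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering

# Hypothetical reasoning criteria extracted from model-generated CoTs
# (illustrative only; a real run would mine these from actual traces).
criteria = [
    "breaks the problem into smaller subproblems",
    "decomposes the task into sequential steps",
    "verifies the answer by substituting it back",
    "checks the result against the original constraints",
    "enumerates candidate options before choosing",
    "lists all possible cases exhaustively",
]

# Embed criteria into a vector space (a neural sentence encoder would
# replace TF-IDF in a real system).
embeddings = TfidfVectorizer().fit_transform(criteria).toarray()

# Hierarchically cluster the embeddings into representative strategy groups.
labels = AgglomerativeClustering(n_clusters=3).fit_predict(embeddings)

for criterion, label in zip(criteria, labels):
    print(f"cluster {label}: {criterion}")
```

Each resulting cluster would then be summarized into a contrastive rubric (e.g., decomposition vs. verification vs. enumeration) used to interpret and steer reasoning behavior.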