🤖 AI Summary
Complex reasoning traces generated by large reasoning models (LRMs)—involving planning, reflection, verification, and backtracking—exhibit intricate semantic structures that are difficult to parse and interpret. Method: This paper proposes ReasoningFlow, a unified modeling framework that formally represents reasoning traces as directed acyclic graphs (DAGs), explicitly defining node semantic types and edge logical relations to enable structural parsing. It further introduces subgraph pattern extraction and structured semantic representations to support cross-model identification, visualization, and comparative analysis of reasoning behaviors. Contribution/Results: ReasoningFlow makes reasoning processes substantially more interpretable, easier to evaluate, and more amenable to targeted improvement. Empirical evaluation across multiple LRMs demonstrates its effectiveness and generalizability in capturing, analyzing, and comparing diverse reasoning strategies.
📝 Abstract
Large reasoning models (LRMs) generate complex reasoning traces with planning, reflection, verification, and backtracking. In this work, we introduce ReasoningFlow, a unified schema for analyzing the semantic structures of these complex traces. ReasoningFlow parses traces into directed acyclic graphs, enabling the characterization of distinct reasoning patterns as subgraph structures. This human-interpretable representation offers promising applications in understanding, evaluating, and enhancing the reasoning processes of LRMs.
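To make the DAG idea concrete, here is a minimal sketch of a reasoning trace as a graph with typed nodes and labeled edges, plus a toy edge-pattern matcher standing in for subgraph-pattern extraction. The node kinds (`plan`, `deduce`, `verify`), relation labels, and the `TraceDAG` API are illustrative assumptions, not the paper's actual ReasoningFlow schema.

```python
from dataclasses import dataclass, field

# NOTE: node kinds and edge relations below are hypothetical
# placeholders, not the ReasoningFlow paper's real label set.
@dataclass
class Node:
    id: str
    kind: str   # e.g. "plan", "deduce", "verify", "backtrack"
    text: str = ""

@dataclass
class TraceDAG:
    nodes: dict = field(default_factory=dict)   # id -> Node
    edges: list = field(default_factory=list)   # (src, dst, relation)

    def add_node(self, nid: str, kind: str, text: str = "") -> None:
        self.nodes[nid] = Node(nid, kind, text)

    def add_edge(self, src: str, dst: str, relation: str) -> None:
        self.edges.append((src, dst, relation))

    def match_pattern(self, src_kind: str, dst_kind: str, relation: str):
        """Return edges whose endpoint kinds and relation label match --
        a single-edge stand-in for richer subgraph-pattern extraction."""
        return [(s, d) for s, d, r in self.edges
                if r == relation
                and self.nodes[s].kind == src_kind
                and self.nodes[d].kind == dst_kind]

# A tiny three-step trace: plan -> deduce -> verify
g = TraceDAG()
g.add_node("n1", "plan", "Outline an approach")
g.add_node("n2", "deduce", "Derive an intermediate result")
g.add_node("n3", "verify", "Check the derivation")
g.add_edge("n1", "n2", "refines")
g.add_edge("n2", "n3", "verifies")

print(g.match_pattern("deduce", "verify", "verifies"))  # [('n2', 'n3')]
```

A pattern such as a `deduce` node followed by a `verify` node is the kind of recurring motif that, per the abstract, can be characterized as a subgraph structure and compared across models.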