AI Summary
Current large language models reason over unstructured text, which incurs high semantic-evaluation overhead and leads to coarse-grained supervision, susceptibility to reward hacking, and limited generalization. To address these limitations, this work proposes the Graph Reasoning Paradigm (GRP), which, for the first time, introduces graph structures annotated with step-level cognitive labels into the reasoning process of large language models, enabling fine-grained, verifiable, structured, and symbolic reasoning. To support this paradigm, we design the Process-Aware Stratified Clipping Group Relative Policy Optimization (PASC-GRPO) algorithm, which replaces conventional semantic evaluation with topology-aware reinforcement learning and a structured reward mechanism. Experiments on mathematical reasoning and code generation tasks demonstrate significant performance improvements, validating the effectiveness and scalability of the proposed approach.
Abstract
Long Chain-of-Thought (LCoT), achieved through Reinforcement Learning with Verifiable Rewards (RLVR), has proven effective in enhancing the reasoning capabilities of Large Language Models (LLMs). However, current LLMs generate reasoning primarily as plain text, and performing semantic evaluation on such unstructured data creates a computational bottleneck during training. Despite RLVR-based optimization, existing methods still suffer from coarse-grained supervision, reward hacking, high training costs, and poor generalization. To address these issues, we propose the Graph Reasoning Paradigm (GRP), which realizes structured and symbolic reasoning via graph-structured representations with step-level cognitive labels. Building on GRP, we further design Process-Aware Stratified Clipping Group Relative Policy Optimization (PASC-GRPO), which replaces semantic evaluation with structured evaluation, achieves process-aware verification through graph-structured outcome rewards, and mitigates reward hacking via stratified clipping advantage estimation. Experiments demonstrate significant improvements across mathematical reasoning and code generation tasks. Data, models, and code will be released later.
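The abstract does not specify PASC-GRPO's exact formulas, but the two named ingredients can be illustrated: a GRPO-style group-relative advantage (rewards normalized within a sampled group) followed by a clipping step whose bound depends on the step's stratum. The stratum labels (`"deduce"`, `"verify"`) and the per-stratum bounds below are hypothetical placeholders, not values from the paper; this is a minimal sketch of the general idea, not the authors' implementation.

```python
# Hypothetical sketch of a group-relative advantage with stratified clipping.
# Stratum names and clip bounds are illustrative assumptions, not from the paper.

def group_relative_advantages(rewards):
    """GRPO-style step: normalize rewards within one sampled group."""
    n = len(rewards)
    mean = sum(rewards) / n
    std = (sum((r - mean) ** 2 for r in rewards) / n) ** 0.5 or 1.0
    return [(r - mean) / std for r in rewards]

def stratified_clip(advantages, strata, bounds):
    """Clip each advantage with a bound chosen by its stratum
    (e.g., a cognitive label attached to the reasoning-graph node)."""
    return [max(-bounds[s], min(bounds[s], a))
            for a, s in zip(advantages, strata)]

rewards = [1.0, 0.2, 0.9, 0.1]                       # outcome rewards for one group
strata = ["deduce", "verify", "deduce", "verify"]    # hypothetical step labels
bounds = {"deduce": 1.5, "verify": 0.5}              # assumed per-stratum bounds

adv = stratified_clip(group_relative_advantages(rewards), strata, bounds)
print([round(a, 3) for a in adv])  # tighter clipping on "verify" steps
```

The point of the per-stratum bound is that some step types may be more prone to reward hacking than others, so their advantage magnitude is capped more aggressively; how PASC-GRPO actually defines strata and bounds is for the full paper to specify.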