🤖 AI Summary
Existing large language models (LLMs) face two key challenges in solving operations research (OR) problems: (1) outcome-oriented rewards induce credit assignment bias, hindering correction of flawed reasoning; and (2) discriminative process supervision operates only locally, failing to ensure global logical coherence in problem modeling. This paper proposes StepORLM, a generative process supervision framework that jointly optimizes a policy model and a generative process reward model (GenPRM) via co-evolution. An external OR solver provides definitive outcome verification, while the GenPRM performs global, fine-grained evaluation of complete reasoning chains; the combined signal drives training via Weighted Direct Preference Optimization (W-DPO). Evaluated on six OR benchmarks, the resulting 8B-parameter model substantially outperforms larger general-purpose LLMs and specialized baselines. Moreover, the GenPRM demonstrates strong cross-model transferability, effectively enhancing the reasoning quality of other LLMs.
📝 Abstract
Large Language Models (LLMs) have shown promising capabilities for solving Operations Research (OR) problems. While reinforcement learning serves as a powerful paradigm for training LLMs on OR problems, existing works generally face two key limitations. First, outcome rewards suffer from the credit assignment problem, where correct final answers can reinforce flawed reasoning. Second, conventional discriminative process supervision is myopic, failing to evaluate the interdependent steps of OR modeling holistically. To this end, we introduce StepORLM, a novel self-evolving framework with generative process supervision. At its core, StepORLM features a co-evolutionary loop in which a policy model and a generative process reward model (GenPRM) iteratively improve each other. This loop is driven by a dual-feedback mechanism: definitive, outcome-based verification from an external solver, and nuanced, holistic process evaluation from the GenPRM. The combined signal is used to align the policy via Weighted Direct Preference Optimization (W-DPO) and simultaneously refine the GenPRM. Our resulting 8B-parameter StepORLM establishes a new state of the art across six benchmarks, significantly outperforming vastly larger generalist models, agentic methods, and specialized baselines. Moreover, the co-evolved GenPRM acts as a powerful and universally applicable process verifier, substantially boosting the inference-scaling performance of both our own model and other existing LLMs.
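The abstract does not spell out the W-DPO objective, so the following is only a minimal sketch under an assumption: it takes standard DPO (Rafailov et al.) and scales each preference pair's loss by a scalar weight, which in StepORLM would presumably be derived from the combined solver/GenPRM feedback. The function name and the per-pair scalar `weight` are illustrative, not the paper's exact formulation.

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def weighted_dpo_loss(policy_logp_chosen: float, policy_logp_rejected: float,
                      ref_logp_chosen: float, ref_logp_rejected: float,
                      weight: float, beta: float = 0.1) -> float:
    """Per-pair weighted DPO loss (sketch, not the paper's exact form).

    Standard DPO pushes the policy to prefer the chosen reasoning chain
    over the rejected one relative to a frozen reference model; here the
    loss is additionally scaled by `weight`, assumed to encode how strongly
    the solver + GenPRM feedback endorses this preference pair.
    """
    # Implicit reward margin: how much more the policy (vs. the reference)
    # favors the chosen chain over the rejected chain.
    margin = beta * ((policy_logp_chosen - ref_logp_chosen)
                     - (policy_logp_rejected - ref_logp_rejected))
    # Standard DPO term -log sigmoid(margin), scaled by the pair weight.
    return -weight * math.log(sigmoid(margin))
```

As the policy's preference margin for the chosen chain grows, the loss shrinks toward zero; doubling the weight doubles the loss, so high-confidence pairs dominate the gradient.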