StepORLM: A Self-Evolving Framework With Generative Process Supervision For Operations Research Language Models

📅 2025-09-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing large language models (LLMs) face two key challenges in solving operations research (OR) problems: (1) outcome-oriented rewards induce credit-assignment bias, so correct final answers can reinforce flawed reasoning; and (2) discriminative process supervision operates only locally, failing to ensure global logical coherence of the problem formulation. This paper proposes StepORLM, a self-evolving framework with generative process supervision that jointly improves a policy model and a Generative Process Reward Model (GenPRM) via co-evolution. The GenPRM, combined with verification from external OR solvers, performs global, fine-grained evaluation of complete reasoning chains, and the policy is aligned via Weighted Direct Preference Optimization (W-DPO). Evaluated on six OR benchmarks, the resulting 8B-parameter model substantially outperforms much larger general-purpose LLMs and specialized baselines. Moreover, the GenPRM demonstrates strong cross-model transferability, effectively enhancing the reasoning quality of other LLMs.

📝 Abstract
Large Language Models (LLMs) have shown promising capabilities for solving Operations Research (OR) problems. While reinforcement learning serves as a powerful paradigm for LLM training on OR problems, existing works generally face two key limitations. First, outcome reward suffers from the credit assignment problem, where correct final answers can reinforce flawed reasoning. Second, conventional discriminative process supervision is myopic, failing to evaluate the interdependent steps of OR modeling holistically. To this end, we introduce StepORLM, a novel self-evolving framework with generative process supervision. At its core, StepORLM features a co-evolutionary loop where a policy model and a generative process reward model (GenPRM) iteratively improve on each other. This loop is driven by a dual-feedback mechanism: definitive, outcome-based verification from an external solver, and nuanced, holistic process evaluation from the GenPRM. The combined signal is used to align the policy via Weighted Direct Preference Optimization (W-DPO) and simultaneously refine the GenPRM. Our resulting 8B-parameter StepORLM establishes a new state-of-the-art across six benchmarks, significantly outperforming vastly larger generalist models, agentic methods, and specialized baselines. Moreover, the co-evolved GenPRM is able to act as a powerful and universally applicable process verifier, substantially boosting the inference scaling performance of both our own model and other existing LLMs.
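The abstract's W-DPO alignment step can be sketched as a per-pair weighted variant of the standard DPO loss. This is a minimal illustration, not the paper's implementation: the function name `wdpo_loss` and the idea of deriving the weight from GenPRM process scores are assumptions based on the summary above.

```python
import math

def log_sigmoid(x):
    # numerically stable log(sigmoid(x))
    return -math.log1p(math.exp(-x)) if x >= 0 else x - math.log1p(math.exp(x))

def wdpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, weight, beta=0.1):
    """Weighted DPO loss for one preference pair (sketch).

    logp_w / logp_l       : policy log-likelihoods of the chosen / rejected solution
    ref_logp_w / ref_logp_l: frozen reference-model log-likelihoods
    weight                : hypothetical pair weight, e.g. from GenPRM process scores
    beta                  : DPO temperature
    """
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -weight * log_sigmoid(margin)
```

A pair whose chosen solution the policy already prefers (relative to the reference) yields a smaller loss, and the weight scales the gradient contribution of each pair.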
Problem

Research questions and friction points this paper is trying to address.

How to fix the credit-assignment flaw of outcome-based reinforcement learning, where correct final answers can reinforce flawed reasoning
How to overcome the myopia of discriminative process supervision, which scores steps locally rather than holistically
How to jointly improve a policy model and a generative process reward model within one co-evolutionary framework
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-evolving framework with generative process supervision for OR modeling
Co-evolutionary loop in which the policy model and the generative process reward model (GenPRM) iteratively improve each other
Dual feedback, solver-based outcome verification plus holistic GenPRM process evaluation, combined for W-DPO alignment
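The co-evolutionary loop described above can be sketched as one iteration that samples candidate reasoning chains, verifies outcomes with an external solver, scores processes with the GenPRM, and assembles weighted preference pairs for W-DPO. All callables here (`policy_sample`, `solver_verify`, `genprm_score`) are hypothetical stand-ins for the paper's components, and using the process-score gap as the pair weight is an assumption.

```python
def co_evolve_step(problems, policy_sample, solver_verify, genprm_score):
    """One simplified iteration of a StepORLM-style loop (sketch, not the paper's code).

    Returns (problem, chosen, rejected, weight) tuples for weighted preference training.
    """
    pairs = []
    for x in problems:
        cands = policy_sample(x)  # candidate reasoning chains for problem x
        scored = [(c, solver_verify(x, c), genprm_score(x, c)) for c in cands]
        correct = [s for s in scored if s[1]]      # solver-verified solutions
        wrong = [s for s in scored if not s[1]]    # solver-rejected solutions
        if correct and wrong:
            chosen = max(correct, key=lambda s: s[2])   # best process score
            rejected = min(wrong, key=lambda s: s[2])   # worst process score
            weight = chosen[2] - rejected[2]            # assumed: score gap as weight
            pairs.append((x, chosen[0], rejected[0], weight))
    return pairs
```

In the full framework the same trajectories that train the policy would also refine the GenPRM, closing the co-evolution loop; that second update is omitted here.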