InfiMed-ORBIT: Aligning LLMs on Open-Ended Complex Tasks via Rubric-Based Incremental Training

📅 2025-10-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge of ambiguous, subjective, and hard-to-model reward signals in open-domain tasks, particularly high-stakes medical dialogue, this paper proposes ORBIT, an incremental reinforcement learning framework grounded in dynamically generated scoring rubrics. ORBIT requires no external medical knowledge or handcrafted rules; instead, it combines synthetic dialogue data with rubric-guided, fine-grained behavioral feedback to drive continual model improvement. Applied to Qwen3-4B-Instruct, ORBIT raises the HealthBench-Hard score from 7.0 to 27.2 using only 2K samples, matching the state of the art among models of comparable scale. Its core innovation is embedding interpretable, evolvable rubrics directly into the RL training loop, establishing a general, controllable, and low-data optimization paradigm for reward-sparse or reward-ambiguous settings.

📝 Abstract
Large Language Models (LLMs) have shown substantial advances through reinforcement learning (RL), particularly in domains where rewards can be programmatically verified, such as mathematics and code. In these areas, models benefit from a well-defined operational base guided by explicit rule-based objectives. However, this progress reveals a significant limitation: in open-ended domains where rewards are ambiguous, subjective, or context-dependent, such as creative writing, scientific reasoning, and notably medical consultation, robust reward functions are lacking, making these areas challenging for current RL strategies. To bridge this gap, we introduce ORBIT, an open-ended rubric-based incremental training framework specifically designed for high-stakes medical dialogue. ORBIT integrates synthetic dialogue generation with the dynamic creation of rubrics, employing these rubrics to direct an incremental RL process. In particular, this approach does not depend on external medical knowledge or manual rules, instead utilizing rubric-guided feedback to shape learning. When implemented on the Qwen3-4B-Instruct model, our method can greatly enhance its performance on the HealthBench-Hard benchmark from 7.0 to 27.2 using only 2k samples, thus achieving state-of-the-art results for models of this scale. Our analysis confirms that rubric-driven RL fosters consistent performance gains across diverse consultation scenarios, going beyond simple numerical improvements. These findings underscore rubric-based feedback as a scalable strategy for advancing LLMs in intricate, open-ended tasks.
Problem

Research questions and friction points this paper is trying to address.

Addresses ambiguous rewards in open-ended LLM tasks
Develops rubric-based RL for medical dialogue training
Enhances model performance on complex health benchmarks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses rubric-based incremental training framework
Integrates synthetic dialogue generation with rubrics
Employs rubric-guided feedback without external knowledge
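The paper does not include reference code here, but the idea of turning a rubric into an RL reward can be sketched minimally: each rubric criterion carries a weight, a judge (in ORBIT, an LLM grader; here a hard-coded boolean) marks it satisfied or not, and the scalar reward is the weighted fraction satisfied. All names and criteria below are hypothetical illustrations, not the paper's actual rubric schema.

```python
from dataclasses import dataclass

@dataclass
class RubricCriterion:
    # Hypothetical structure: in ORBIT-style training, `satisfied`
    # would be judged by an LLM grader, not set by hand.
    description: str
    weight: float
    satisfied: bool

def rubric_reward(criteria: list[RubricCriterion]) -> float:
    """Weighted fraction of rubric criteria satisfied, in [0, 1]."""
    total = sum(c.weight for c in criteria)
    if total == 0:
        return 0.0
    earned = sum(c.weight for c in criteria if c.satisfied)
    return earned / total

# Illustrative rubric for one medical-dialogue turn.
criteria = [
    RubricCriterion("Asks about symptom duration", 1.0, True),
    RubricCriterion("Flags red-flag symptoms for urgent care", 2.0, True),
    RubricCriterion("Avoids giving a definitive diagnosis", 1.0, False),
]
print(rubric_reward(criteria))  # -> 0.75
```

A scalar like this can then feed a standard policy-gradient update, which is what makes rubrics attractive in settings where no programmatic verifier exists.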