🤖 AI Summary
Existing post-training methods for medical large language models (e.g., GRPO) optimize solely for answer accuracy and fail to jointly ensure reasoning faithfulness and completeness, a critical limitation for high-stakes clinical deployment.
Method: We propose Clinical-Objective Relative Policy Optimization (CRPO), the first multi-objective reinforcement learning framework that jointly models accuracy, faithfulness, and completeness. CRPO introduces a rule-driven, annotation-free, verifiable reward mechanism enabling unsupervised multi-objective alignment.
Results: On three medical benchmarks, CRPO matches GRPO's accuracy while substantially improving reasoning faithfulness (+18.3% on average) and completeness (+22.7%). The resulting 3B-parameter clinical reasoning model achieves verifiable, multi-objective optimization, establishing a new paradigm for trustworthy AI in high-risk domains.
📝 Abstract
Recent advances in large language models (LLMs) have shown strong reasoning capabilities through large-scale pretraining and post-training reinforcement learning, as demonstrated by DeepSeek-R1. However, current post-training methods, such as Group Relative Policy Optimization (GRPO), mainly reward correctness, which is misaligned with the multi-dimensional objectives required in high-stakes fields such as medicine, where reasoning must also be faithful and comprehensive. We introduce Clinical-Objective Relative Policy Optimization (CRPO), a scalable, multi-objective, verifiable reinforcement learning method designed to align LLM post-training with clinical reasoning principles. CRPO integrates rule-based, verifiable reward signals that jointly optimize accuracy, faithfulness, and comprehensiveness without relying on human annotation. To demonstrate its effectiveness, we train Clinical-R1-3B, a 3B-parameter model for clinical reasoning. Experiments on three benchmarks show that CRPO substantially improves reasoning faithfulness and completeness over standard GRPO while maintaining comparable accuracy. This framework provides a scalable pathway to align LLM reasoning with clinical objectives, enabling safer and more collaborative AI systems for healthcare, and highlights the potential of multi-objective, verifiable RL methods in post-training scaling of LLMs for medical domains.
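To make the reward design concrete, here is a minimal sketch of what a rule-based, annotation-free multi-objective reward of the kind the abstract describes might look like. The weights, the exact-match accuracy rule, and the substring-based faithfulness and completeness proxies are illustrative assumptions, not the paper's actual reward rules.

```python
# Hypothetical sketch of a CRPO-style composite reward. All rules and
# weights below are illustrative assumptions, not the paper's exact design.

def accuracy_reward(answer: str, gold: str) -> float:
    # Verifiable rule: normalized exact match against the reference answer.
    return 1.0 if answer.strip().lower() == gold.strip().lower() else 0.0

def faithfulness_reward(reasoning: str, evidence: list[str]) -> float:
    # Rule-based proxy: fraction of reasoning sentences that cite
    # some piece of the provided case evidence.
    sentences = [s for s in reasoning.split(".") if s.strip()]
    if not sentences:
        return 0.0
    grounded = sum(any(e.lower() in s.lower() for e in evidence) for s in sentences)
    return grounded / len(sentences)

def completeness_reward(reasoning: str, required_points: list[str]) -> float:
    # Rule-based proxy: coverage of clinically required reasoning points.
    if not required_points:
        return 1.0
    covered = sum(p.lower() in reasoning.lower() for p in required_points)
    return covered / len(required_points)

def crpo_reward(answer: str, reasoning: str, gold: str,
                evidence: list[str], required_points: list[str],
                w_acc: float = 0.5, w_faith: float = 0.25,
                w_comp: float = 0.25) -> float:
    # Weighted combination of the three verifiable objectives; this scalar
    # would replace the correctness-only reward in a GRPO-style update.
    return (w_acc * accuracy_reward(answer, gold)
            + w_faith * faithfulness_reward(reasoning, evidence)
            + w_comp * completeness_reward(reasoning, required_points))
```

Because each term is computed by deterministic rules over the model's output, the combined reward stays verifiable and requires no human annotation, which is what allows this style of multi-objective alignment to scale.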