🤖 AI Summary
This work addresses the misalignment between sub-claim decomposition quality and verification performance in complex claim verification. We propose the first joint optimization framework based on Group Relative Policy Optimization (GRPO), which explicitly aligns the decomposition process with verification objectives through end-to-end co-optimization. Our approach integrates structured sequential reasoning, teacher-distillation fine-tuning, and a multi-objective reward mechanism. Evaluated across six settings, an 8B-parameter decomposer achieves a macro F1 score of 71.75%, significantly outperforming existing prompt engineering and reinforcement learning baselines. Human evaluations further confirm the high quality of the generated sub-claims, demonstrating that even small verification models can attain state-of-the-art performance when paired with our decomposition strategy.
📝 Abstract
Complex claim verification requires decomposing sentences into verifiable subclaims, yet existing methods struggle to align decomposition quality with verification performance. We propose a reinforcement learning (RL) approach that jointly optimizes decomposition quality and verifier alignment using Group Relative Policy Optimization (GRPO). Our method integrates: (i) structured sequential reasoning; (ii) supervised finetuning on teacher-distilled exemplars; and (iii) a multi-objective reward balancing format compliance, verifier alignment, and decomposition quality. Across six evaluation settings, our trained 8B decomposer improves downstream verification performance to (71.75%) macro-F1, outperforming prompt-based approaches ((+1.99), (+6.24)) and existing RL methods ((+5.84)). Human evaluation confirms the high quality of the generated subclaims. Our framework enables smaller language models to achieve state-of-the-art claim verification by jointly optimising for verification accuracy and decomposition quality.