🤖 AI Summary
In visual grounding, reinforcement learning (RL)-based fine-tuning of chain-of-thought (CoT) reasoning can paradoxically degrade performance when reasoning chains grow long or data complexity is imbalanced, and merely scaling up training data proves insufficient. To address this, we propose CuRPO, a curriculum-based relative policy optimization framework that uses reasoning-chain length and the generalized Intersection over Union (gIoU) reward as complexity indicators. CuRPO trains the model on samples ordered by difficulty, guiding it from simple to complex referring expressions; it leverages CoT to generate interpretable intermediate reasoning steps and employs sparse gIoU feedback for policy optimization. Evaluated on four benchmarks (RefCOCO, RefCOCO+, RefCOCOg, and LISA), CuRPO achieves improvements of up to +12.52 mAP and is significantly more robust in few-shot settings. These results empirically indicate that curriculum-driven control over reasoning-chain length and data complexity is critical for improving grounding accuracy.
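The gIoU reward mentioned above is the standard generalized IoU for axis-aligned boxes: plain IoU minus the fraction of the smallest enclosing box not covered by the union, which keeps the signal informative even when predicted and ground-truth boxes do not overlap. A minimal sketch (the `(x1, y1, x2, y2)` box convention is an assumption, and this is the generic metric, not CuRPO's exact reward code):

```python
def giou(a, b):
    """Generalized IoU for axis-aligned boxes given as (x1, y1, x2, y2).

    GIoU = IoU - |C \\ (A ∪ B)| / |C|, where C is the smallest box
    enclosing both A and B; the value lies in (-1, 1].
    """
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    # Intersection rectangle (clamped to zero when the boxes are disjoint).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    iou = inter / union if union > 0 else 0.0
    # Smallest enclosing box C of A and B.
    cx1, cy1 = min(ax1, bx1), min(ay1, by1)
    cx2, cy2 = max(ax2, bx2), max(ay2, by2)
    area_c = (cx2 - cx1) * (cy2 - cy1)
    return iou - (area_c - union) / area_c if area_c > 0 else iou
```

Unlike plain IoU, which is zero for any pair of disjoint boxes, gIoU decreases as the boxes move farther apart, so it still provides a useful gradient-free reward signal for policy optimization.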
📝 Abstract
Chain-of-Thought (CoT) prompting has recently shown significant promise across various NLP and computer vision tasks by explicitly generating intermediate reasoning steps. However, we find that reinforcement learning (RL)-based fine-tuned CoT reasoning can paradoxically degrade performance in Visual Grounding tasks, particularly as CoT outputs become lengthy or complex. Additionally, our analysis reveals that increased dataset size does not always enhance performance due to varying data complexities. Motivated by these findings, we propose Curriculum-based Relative Policy Optimization (CuRPO), a novel training strategy that leverages CoT length and generalized Intersection over Union (gIoU) rewards as complexity indicators to progressively structure training data from simpler to more challenging examples. Extensive experiments on the RefCOCO, RefCOCO+, RefCOCOg, and LISA datasets demonstrate the effectiveness of our approach. CuRPO consistently outperforms existing methods, including Visual-RFT, with notable improvements of up to +12.52 mAP on RefCOCO. Moreover, CuRPO exhibits exceptional efficiency and robustness, delivering strong localization performance even in few-shot learning scenarios, particularly benefiting tasks characterized by ambiguous and intricate textual descriptions. The code is released at https://github.com/qyoung-yan/CuRPO.
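The curriculum idea described above, ranking samples from easy to hard using CoT length and gIoU as complexity indicators, can be sketched as follows. The field names (`cot_len`, `giou`), the linear mixing weight `alpha`, and the use of warm-up-model gIoU scores are illustrative assumptions, not the paper's exact recipe:

```python
def curriculum_order(samples, alpha=0.5):
    """Order training samples from easy to hard.

    Each sample is a dict with two complexity indicators (hypothetical
    field names for this sketch):
      'cot_len' - token length of its chain-of-thought (longer = harder)
      'giou'    - gIoU achieved by a warm-up model (lower = harder)
    alpha trades off normalized CoT length against (1 - gIoU).
    """
    max_len = max(s["cot_len"] for s in samples) or 1

    def difficulty(s):
        # Linear mix of the two indicators; both lie in [0, 1].
        return alpha * (s["cot_len"] / max_len) + (1 - alpha) * (1.0 - s["giou"])

    return sorted(samples, key=difficulty)


def curriculum_stages(samples, n_stages=3, alpha=0.5):
    """Split the easy-to-hard ordering into progressive training stages."""
    ordered = curriculum_order(samples, alpha)
    k = max(1, len(ordered) // n_stages)
    return [ordered[i : i + k] for i in range(0, len(ordered), k)]
```

Training then proceeds stage by stage, so the policy sees short-CoT, well-localized examples before long, ambiguous ones; the relative-policy-optimization update itself is unchanged by the ordering.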