🤖 AI Summary
Large vision-language models (VLMs) exhibit limited compositional generalization—especially in out-of-distribution, cross-modal, and cross-task settings—despite the strong compositional reasoning capabilities demonstrated by large language models (LLMs) after post-training (e.g., reinforcement learning).
Method: To address this, we propose: (1) a “describe-then-reason” architecture that decouples visual perception from logical reasoning; and (2) a progressive vision–text grounding reward mechanism that explicitly models alignment quality and fine-grained localization accuracy.
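The progressive vision–text grounding reward described above can be sketched as a weighted combination of an alignment score and a localization score whose weights shift over training. The paper does not specify the exact formulation, so everything below is an illustrative assumption: the keyword-overlap alignment score, the IoU localization proxy, and the linear weight schedule are all hypothetical stand-ins.

```python
from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned bounding box (x1, y1, x2, y2)."""
    x1: float
    y1: float
    x2: float
    y2: float

def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two boxes; a common proxy for
    fine-grained localization accuracy."""
    ix = max(0.0, min(a.x2, b.x2) - max(a.x1, b.x1))
    iy = max(0.0, min(a.y2, b.y2) - max(a.y1, b.y1))
    inter = ix * iy
    union = (a.x2 - a.x1) * (a.y2 - a.y1) + (b.x2 - b.x1) * (b.y2 - b.y1) - inter
    return inter / union if union > 0 else 0.0

def caption_alignment(pred_caption: str, ref_keywords: list[str]) -> float:
    """Toy vision-to-text alignment score: fraction of reference
    keywords mentioned in the model's caption (assumed, not from the paper)."""
    text = pred_caption.lower()
    hits = sum(1 for k in ref_keywords if k.lower() in text)
    return hits / len(ref_keywords) if ref_keywords else 0.0

def progressive_grounding_reward(pred_caption: str, ref_keywords: list[str],
                                 pred_box: Box, gt_box: Box,
                                 step: int, total_steps: int) -> float:
    """Progressive reward sketch: early in training the reward is dominated
    by caption-level alignment; weight gradually shifts toward precise
    localization (IoU). The linear schedule is an assumption."""
    t = step / max(total_steps, 1)  # training progress in [0, 1]
    return (1.0 - t) * caption_alignment(pred_caption, ref_keywords) + t * iou(pred_box, gt_box)
```

At step 0 the reward reduces to pure alignment; at the final step it reduces to pure IoU, so the model is first pushed to describe the scene faithfully and only later penalized hard for imprecise grounding.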
Contribution/Results: Reinforcement learning (RL) fine-tuning significantly outperforms supervised fine-tuning (SFT), yielding stable performance gains on compositional diagnostic benchmarks. Our analysis reveals, for the first time, that vision–text alignment fidelity and precise spatial grounding are two fundamental bottlenecks limiting compositional generalization in VLMs. The proposed framework offers a reproducible, scalable pathway for enhancing multimodal reasoning—bridging the gap between perceptual grounding and structured inference in VLMs.
📝 Abstract
While large language models (LLMs) demonstrate strong reasoning capabilities when trained with reinforcement learning (RL) and verifiable rewards, whether large vision-language models (VLMs) can directly inherit such capabilities through similar post-training strategies remains underexplored. In this work, we conduct a systematic compositional probing study to evaluate whether current VLMs trained with RL or other post-training strategies can compose capabilities across modalities or tasks under out-of-distribution conditions. We design a suite of diagnostic tasks that train models on unimodal tasks or isolated reasoning skills and evaluate them on multimodal, compositional variants requiring skill integration. Through comparisons between supervised fine-tuning (SFT) and RL-trained models, we identify three key findings: (1) RL-trained models consistently outperform SFT on compositional generalization, demonstrating better integration of learned skills; (2) although VLMs achieve strong performance on individual tasks, they struggle to generalize compositionally in cross-modal and cross-task scenarios, revealing a significant gap in current training strategies; (3) enforcing models to explicitly describe visual content before reasoning (e.g., caption-before-thinking), together with rewarding progressive vision-to-text grounding, yields notable gains. This highlights two essential ingredients for improving compositionality in VLMs: vision-to-text alignment and accurate visual grounding. Our findings shed light on the current limitations of RL-based training of reasoning VLMs and provide actionable insights toward building models that reason compositionally across modalities and tasks.
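The "caption-before-thinking" constraint in finding (3) pairs naturally with a verifiable format reward, a standard pattern in RL post-training with verifiable rewards. The abstract does not give the actual prompt or checker, so the tag names and template below are hypothetical illustrations of the idea, not the paper's implementation.

```python
import re

# Hypothetical system prompt enforcing a describe-then-reason output order.
SYSTEM_PROMPT = (
    "First describe the image inside <caption></caption>. "
    "Then reason step by step inside <think></think>. "
    "Finally give the answer inside <answer></answer>."
)

def format_reward(response: str) -> float:
    """Binary verifiable reward: 1.0 only if the response contains a
    caption block, then a think block, then an answer block, in that order."""
    pattern = r"<caption>.*?</caption>\s*<think>.*?</think>\s*<answer>.*?</answer>"
    return 1.0 if re.fullmatch(pattern, response.strip(), re.DOTALL) else 0.0
```

A response that skips the caption and jumps straight to reasoning earns zero format reward, which is what forces the perception step to be made explicit before inference.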