CPPO: Contrastive Perception for Vision Language Policy Optimization

📅 2026-01-01
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing multimodal reinforcement learning approaches struggle to effectively disentangle perception and reasoning in vision-language models, often relying on auxiliary models, annotated data, or uniform reward assignment across all output tokens, which leads to inefficiency or suboptimal performance. This work proposes a self-supervised method that requires neither additional models nor labeled data. By analyzing entropy changes in output tokens under input-image perturbations, the method automatically identifies perception-related tokens and introduces a Contrastive Perception Loss (CPL): it enforces output consistency under perturbations that preserve semantic content while enhancing sensitivity to perturbations that remove critical information. This approach enables joint optimization of perception and reasoning, significantly outperforming existing perception-aware reward methods on multimodal tasks, with improved training efficiency and strong scalability.
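The entropy-shift idea in the summary can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the use of absolute entropy difference, and the top-k selection rule are all my assumptions; the paper only states that perception tokens are identified from entropy changes under image perturbations.

```python
import numpy as np

def token_entropy(probs):
    # Shannon entropy per output position; probs has shape (T, V),
    # each row a next-token distribution summing to 1.
    return -np.sum(probs * np.log(probs + 1e-12), axis=-1)

def detect_perception_tokens(probs_clean, probs_perturbed, top_k=2):
    # Hypothetical detection rule: positions whose predictive entropy
    # shifts most when the input image is perturbed are flagged as
    # perception-related tokens.
    shift = np.abs(token_entropy(probs_perturbed) - token_entropy(probs_clean))
    return np.argsort(shift)[::-1][:top_k]
```

In this toy setting, a token whose distribution stays sharp under perturbation is treated as reasoning-driven, while one that collapses toward uniform (large entropy shift) is treated as perception-driven.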

📝 Abstract
We introduce CPPO, a Contrastive Perception Policy Optimization method for finetuning vision-language models (VLMs). While reinforcement learning (RL) has advanced reasoning in language models, extending it to multimodal reasoning requires improving both the perception and reasoning aspects. Prior works tackle this challenge mainly with explicit perception rewards, but disentangling perception tokens from reasoning tokens is difficult, requiring extra LLMs, ground-truth data, forced separation of perception from reasoning by the policy model, or indiscriminate application of rewards to all output tokens. CPPO addresses this problem by detecting perception tokens via entropy shifts in the model's outputs under perturbed input images. CPPO then extends the RL objective with a Contrastive Perception Loss (CPL) that enforces consistency under information-preserving perturbations and sensitivity under information-removing ones. Experiments show that CPPO surpasses previous perception-rewarding methods while avoiding extra models, making training more efficient and scalable.
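The two-sided objective described in the abstract can be illustrated with a small sketch. The exact form of the paper's CPL is not given here, so everything below is an assumption: I use a KL divergence for the consistency term and a hinge with a `margin` hyperparameter for the sensitivity term, which is one common way to combine an attractive and a repulsive term in a contrastive loss.

```python
import numpy as np

def kl(p, q):
    # KL(p || q) per row; p, q have shape (T, V).
    return np.sum(p * np.log((p + 1e-12) / (q + 1e-12)), axis=-1)

def contrastive_perception_loss(p_clean, p_preserving, p_removing, margin=1.0):
    # Hypothetical CPL sketch:
    #  - consistency: outputs under an information-PRESERVING perturbation
    #    should match the clean outputs (KL pulled toward 0);
    #  - sensitivity: outputs under an information-REMOVING perturbation
    #    should diverge by at least `margin` (hinge penalty otherwise).
    consistency = kl(p_clean, p_preserving).mean()
    sensitivity = np.maximum(0.0, margin - kl(p_clean, p_removing)).mean()
    return consistency + sensitivity
```

The design intent is that a model which ignores the image pays the sensitivity penalty (its outputs do not change when visual information is removed), while a model that is merely noisy pays the consistency penalty.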
Problem

Research questions and friction points this paper is trying to address.

vision-language models
reinforcement learning
perception-reasoning disentanglement
multimodal reasoning
perception reward
Innovation

Methods, ideas, or system contributions that make the work stand out.

Contrastive Perception
Vision-Language Models
Reinforcement Learning
Perception-Reward Disentanglement
Entropy-based Token Detection