🤖 AI Summary
Existing multimodal large language models (MLLMs) are predominantly designed for single-task, single-modality settings, hindering cross-task and cross-modal (e.g., image-text, video) knowledge sharing and general-purpose reasoning.
Method: We propose OneThinker, a unified multimodal visual reasoning framework. It introduces OneThinker-600k, a large-scale, multi-task dataset covering ten vision tasks including question answering, captioning, spatiotemporal localization, tracking, and segmentation. To address reward heterogeneity in multi-task reinforcement learning (RL), we design EMA-GRPO, an RL algorithm that balances optimization across tasks by tracking task-wise moving averages of reward standard deviations, enabling effective knowledge transfer and zero-shot generalization. We leverage commercial models to generate chain-of-thought annotations and combine supervised fine-tuning (SFT) initialization with RL optimization.
Contribution/Results: OneThinker achieves state-of-the-art performance across 31 diverse visual benchmarks, demonstrating superior generalization, cross-task collaborative optimization, and promising zero-shot reasoning capabilities.
📝 Abstract
Reinforcement learning (RL) has recently achieved remarkable success in eliciting visual reasoning within Multimodal Large Language Models (MLLMs). However, existing approaches typically train separate models for different tasks and treat image and video reasoning as disjoint domains. This limits scalability toward a multimodal reasoning generalist, restricting practical versatility and hindering potential knowledge sharing across tasks and modalities. To this end, we propose OneThinker, an all-in-one reasoning model that unifies image and video understanding across diverse fundamental visual tasks, including question answering, captioning, spatial and temporal grounding, tracking, and segmentation. To achieve this, we construct the OneThinker-600k training corpus covering all these tasks and employ commercial models for CoT annotation, yielding OneThinker-SFT-340k for the SFT cold start. Furthermore, we propose EMA-GRPO to handle reward heterogeneity in multi-task RL by tracking task-wise moving averages of reward standard deviations for balanced optimization. Extensive experiments show that OneThinker delivers strong performance on 31 benchmarks across 10 fundamental visual understanding tasks. Moreover, it exhibits effective knowledge transfer between certain tasks and preliminary zero-shot generalization ability, marking a step toward a unified multimodal reasoning generalist. All code, models, and data are released.
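The EMA-GRPO idea described in the abstract, normalizing GRPO-style advantages by a task-wise exponential moving average of reward standard deviations rather than the raw per-group std, can be sketched as follows. This is a minimal illustration under our own assumptions; the class name, decay value, and update rule are not taken from the paper.

```python
import numpy as np

class EMAGRPONormalizer:
    """Illustrative sketch (not the paper's implementation) of advantage
    normalization with task-wise EMA reward statistics.

    Plain GRPO normalizes rewards within each rollout group:
        A_i = (r_i - mean(r)) / std(r).
    With heterogeneous tasks, per-group stds differ wildly in scale, so
    the sketch below divides by a per-task moving average of the std
    instead, balancing gradient magnitudes across tasks.
    """

    def __init__(self, decay: float = 0.99, eps: float = 1e-6):
        self.decay = decay
        self.eps = eps
        self.ema_std: dict[str, float] = {}  # task name -> EMA of reward std

    def advantages(self, task: str, rewards: np.ndarray) -> np.ndarray:
        group_std = float(rewards.std())
        # Initialize the EMA with the first observed group std for this task,
        # then update it with exponential decay.
        prev = self.ema_std.get(task, group_std)
        self.ema_std[task] = self.decay * prev + (1.0 - self.decay) * group_std
        # Center by the group mean as in GRPO, but scale by the task-wise
        # EMA std rather than the raw per-group std.
        return (rewards - rewards.mean()) / (self.ema_std[task] + self.eps)
```

In this sketch, a task whose rewards are consistently low-variance keeps a small EMA denominator, so its advantages are not drowned out by tasks with noisier reward signals.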