🤖 AI Summary
To address poor task adaptability, low fine-grained visual recognition accuracy, and the degradation of general multimodal capability that occurs when specialized vision modules are integrated into Multimodal Large Language Models (MLLMs), this paper proposes the Task Preference Optimization (TPO) framework. Its core is a differentiable task preference mechanism: learnable task tokens dynamically couple multiple vision task heads to the MLLM backbone, and this is combined with preference modeling driven by rich visual labels and multi-task co-training. TPO thereby improves multiple fine-grained vision tasks simultaneously without compromising general multimodal competence. Instantiated on VideoChat and LLaVA, TPO improves overall multimodal benchmark performance by 14.6% over the baselines and, in zero-shot cross-task evaluation, performs comparably to supervised state-of-the-art (SOTA) models.
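The task-token wiring described above can be sketched roughly as follows. This is a toy illustration under stated assumptions, not the paper's released implementation: the backbone is a stand-in pooling function, the task names, dimensions, and helper names (`run_task`, `backbone`) are all hypothetical.

```python
# Toy sketch of TPO-style task tokens: one learnable embedding per vision task
# conditions a shared backbone, whose output is routed to a task-specific head.
# All names and dimensions here are illustrative, not from the TPO codebase.
import numpy as np

rng = np.random.default_rng(0)
D = 8  # hidden size of the toy "MLLM" backbone

# Learnable task tokens: one embedding per fine-grained vision task.
task_tokens = {t: rng.normal(size=D) for t in ("segmentation", "tracking", "grounding")}

# Task-specific heads attached to the backbone's pooled hidden state (D -> 4).
heads = {t: rng.normal(size=(D, 4)) for t in task_tokens}

def backbone(seq):
    # Stand-in for the MLLM: just mean-pools the token sequence.
    return seq.mean(axis=0)

def run_task(task, visual_feats):
    # Append the task token so the backbone is conditioned on the chosen task,
    # then route the pooled state through that task's head.
    seq = np.vstack([visual_feats, task_tokens[task]])
    h = backbone(seq)
    return h @ heads[task]

visual_feats = rng.normal(size=(5, D))  # toy visual features
out = run_task("segmentation", visual_feats)
print(out.shape)  # (4,)
```

In training, the task tokens and heads would be optimized jointly with (or alongside) the backbone, which is how multi-task co-training can yield the cross-task synergy the paper reports.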
📝 Abstract
Current multimodal large language models (MLLMs) offer comprehensive perception and reasoning across a spectrum of vision applications, yet they struggle with fine-grained or precise visual understanding. Recent studies either develop tool use or unify specific visual tasks into an autoregressive framework, often at the expense of overall multimodal performance. To address this issue and enhance MLLMs with visual tasks in a scalable fashion, we propose Task Preference Optimization (TPO), a novel method that utilizes differentiable task preferences derived from typical fine-grained visual tasks. TPO introduces learnable task tokens that establish connections between multiple task-specific heads and the MLLM. By leveraging rich visual labels during training, TPO significantly enhances the MLLM's multimodal capabilities and task-specific performance. Through multi-task co-training within TPO, we observe synergistic benefits that elevate individual task performance beyond what is achievable through single-task training. Our instantiation of this approach with VideoChat and LLaVA demonstrates an overall 14.6% improvement in multimodal performance compared to baseline models. Additionally, MLLM-TPO demonstrates robust zero-shot capabilities across various tasks, performing comparably to state-of-the-art supervised models. The code will be released at https://github.com/OpenGVLab/TPO.