Task Preference Optimization: Improving Multimodal Large Language Models with Vision Task Alignment

📅 2024-12-26
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address poor task adaptability, low fine-grained visual recognition accuracy, and the degradation of general multimodal capabilities that occurs when specialized vision modules are integrated into Multimodal Large Language Models (MLLMs), this paper proposes the Task Preference Optimization (TPO) framework. Its core is a differentiable task preference mechanism: learnable task tokens dynamically couple multiple vision task heads to the MLLM backbone, and, combined with preference modeling driven by dense visual labels and multi-task co-training, TPO improves multiple fine-grained vision tasks simultaneously without compromising general multimodal competence. Instantiated on VideoChat and LLaVA, TPO improves overall multimodal benchmark performance by 14.6% and performs comparably to supervised state-of-the-art (SOTA) models in zero-shot cross-task evaluation.

📝 Abstract
Current multimodal large language models (MLLMs) struggle with fine-grained or precise understanding of visuals, though they offer comprehensive perception and reasoning across a spectrum of vision applications. Recent studies either develop tool-using approaches or unify specific visual tasks into the autoregressive framework, often at the expense of overall multimodal performance. To address this issue and enhance MLLMs with visual tasks in a scalable fashion, we propose Task Preference Optimization (TPO), a novel method that utilizes differentiable task preferences derived from typical fine-grained visual tasks. TPO introduces learnable task tokens that establish connections between multiple task-specific heads and the MLLM. By leveraging rich visual labels during training, TPO significantly enhances the MLLM's multimodal capabilities and task-specific performance. Through multi-task co-training within TPO, we observe synergistic benefits that elevate individual task performance beyond what is achievable through single-task training methodologies. Our instantiation of this approach with VideoChat and LLaVA demonstrates an overall 14.6% improvement in multimodal performance compared to baseline models. Additionally, MLLM-TPO demonstrates robust zero-shot capabilities across various tasks, performing comparably to state-of-the-art supervised models. The code will be released at https://github.com/OpenGVLab/TPO.
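To make the abstract's core idea concrete, here is a minimal sketch of how learnable task tokens could couple task-specific heads to an MLLM backbone's hidden states. This is an illustration only, not the authors' implementation: the class name `TPOSketch`, the dot-product pooling, and the per-task linear heads are all simplifying assumptions for exposition.

```python
import numpy as np

rng = np.random.default_rng(0)


def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)


class TPOSketch:
    """Illustrative sketch (NOT the paper's code): one learnable token per
    vision task attends over the MLLM backbone's hidden states, and a
    lightweight per-task head decodes the pooled representation."""

    def __init__(self, hidden_dim=64, num_tasks=3):
        # One learnable task token per vision task
        # (e.g. segmentation, tracking, temporal grounding).
        self.task_tokens = rng.normal(size=(num_tasks, hidden_dim))
        # Task-specific heads; real heads would be task-appropriate decoders.
        self.heads = [rng.normal(size=(hidden_dim, hidden_dim))
                      for _ in range(num_tasks)]

    def forward(self, hidden_states):
        """hidden_states: (batch, seq, hidden) from the MLLM backbone."""
        outputs = []
        for token, head in zip(self.task_tokens, self.heads):
            # Task token attends over the sequence via dot-product scores.
            attn = softmax(hidden_states @ token)            # (batch, seq)
            pooled = np.einsum('bs,bsh->bh', attn, hidden_states)
            outputs.append(pooled @ head)                    # (batch, hidden)
        return outputs
```

In the paper's framing, both the task tokens and the heads would be trained jointly with the MLLM on dense visual labels, so gradients from each task head flow back through its token into the shared backbone.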
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Visual Tasks
Performance Degradation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Task Preference Optimization
Multi-task Learning
Visual Detail Enhancement
Ziang Yan
Shanghai AI Laboratory, Zhejiang University
Zhilin Li
University of Science and Technology of China, Shanghai AI Laboratory
Yinan He
Shanghai AI Laboratory
Chenting Wang
Shanghai Jiao Tong University
Computer Vision, Video Understanding
Kunchang Li
ByteDance Seed
Video Understanding, Multimodal Learning
Xinhao Li
Nanjing University
Video Understanding, Multimodal LLM, Vision-Language Learning
Xiangyu Zeng
Nanjing University, Shanghai AI Laboratory
Zilei Wang
University of Science and Technology of China
Computer Vision, Deep Learning, Pattern Recognition
Yali Wang
Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shanghai AI Laboratory
Yu Qiao
Shanghai AI Laboratory
Limin Wang
Nanjing University, Shanghai AI Laboratory
Yi Wang
Shanghai AI Laboratory