VITAL: Vision-Encoder-centered Pre-training for LMMs in Visual Quality Assessment

📅 2025-11-22
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Existing visual quality assessment large multimodal models (VQualA LMMs) are predominantly single-task architectures that rely on full-parameter fine-tuning, resulting in limited generalization and cross-modal transfer capability. Method: We propose a unified architecture centered on a vision encoder, yielding the VITAL-Series: large models that support both image/video quality scoring and natural-language quality explanation. We introduce a machine-executed annotation-auditing paradigm to construct a vision-language dataset exceeding 4.5 million samples. Furthermore, we design an efficient vision-encoder-based model zoo extension mechanism that integrates generative pre-training with lightweight decoder fine-tuning. Contribution/Results: Our approach achieves strong zero-shot performance and ultra-low-data adaptation: roughly 0.1% of the training data suffices to warm up a paired decoder and match full-data fine-tuning performance. Extensive experiments demonstrate significant improvements in generalization and cross-task transfer across diverse multimodal quality assessment benchmarks.
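The central mechanism in the summary (a shared vision encoder kept frozen while a lightweight paired decoder is warmed up on a tiny data fraction) can be illustrated with a minimal PyTorch-style sketch. All module names, dimensions, and the stub encoder below are illustrative assumptions, not the paper's actual components.

```python
import torch
from torch import nn


class StubViT(nn.Module):
    """Stand-in for the pre-trained vision encoder (shapes only, not the real backbone)."""

    def __init__(self, dim: int = 768):
        super().__init__()
        self.patchify = nn.Conv2d(3, dim, kernel_size=16, stride=16)

    def forward(self, images):                       # images: (B, 3, H, W)
        x = self.patchify(images)                    # (B, dim, H/16, W/16)
        return x.flatten(2).transpose(1, 2)          # (B, num_patches, dim)


class VITALStyleModel(nn.Module):
    """Frozen, shared vision encoder plus a lightweight trainable decoder and score head."""

    def __init__(self, encoder: nn.Module, feat_dim: int = 768, dec_dim: int = 512):
        super().__init__()
        self.encoder = encoder
        self.encoder.requires_grad_(False)           # the encoder is the frozen shared core
        self.projector = nn.Linear(feat_dim, dec_dim)
        self.decoder = nn.TransformerDecoderLayer(d_model=dec_dim, nhead=8, batch_first=True)
        self.score_head = nn.Linear(dec_dim, 1)      # scalar quality score

    def forward(self, images, text_emb):             # text_emb: (B, T, dec_dim)
        with torch.no_grad():                        # encoder stays frozen during warm-up
            vis = self.encoder(images)
        memory = self.projector(vis)                 # project visual tokens to decoder width
        out = self.decoder(tgt=text_emb, memory=memory)
        return self.score_head(out.mean(dim=1)).squeeze(-1)


# Decoder warm-up: the optimizer only sees parameters that still require gradients,
# mirroring the idea of activating a new paired decoder with very little data.
model = VITALStyleModel(StubViT())
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
```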

πŸ“ Abstract
Developing a robust visual quality assessment (VQualA) large multi-modal model (LMM) requires achieving versatility, power, and transferability. However, existing VQualA LMMs typically focus on a single task and rely on full-parameter fine-tuning, which makes them prone to overfitting on specific modalities or task types, thereby limiting their generalization capacity and transferability. To address this, we propose a vision-encoder-centered generative pre-training pipeline and develop the VITAL-Series LMMs. (1) We adopt a machine-executed annotation-scrutiny paradigm, constructing over 4.5M vision-language (VL) pairs, the largest VQualA training dataset to date. (2) We employ a multi-task training workflow that simultaneously enhances the model's quantitative scoring precision and strengthens its capability for quality interpretation across both image and video modalities. (3) Building upon the vision encoder, we realize an efficient model zoo extension: the model zoo exhibits strong zero-shot performance, and each paired decoder requires only a swift warm-up using less than 1/1000 of the pre-training data to achieve performance comparable to its fully trained counterpart. Overall, our work lays a cornerstone for advancing toward a foundation LMM for VQualA.
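To make the multi-task training workflow concrete, a common way to combine quantitative scoring with natural-language quality interpretation is to sum a score-regression loss and a language-modeling loss over explanation tokens. The sketch below assumes this form (L1 regression plus token cross-entropy with a weighting factor lam); it is an illustration of the general recipe, not the exact objective or weighting used by VITAL.

```python
import torch.nn.functional as F


def multitask_quality_loss(pred_score, gt_score, text_logits, text_targets, lam: float = 1.0):
    """Assumed joint objective for scoring + explanation (illustrative, not the paper's exact loss).

    pred_score:   (B,)      predicted quality scores
    gt_score:     (B,)      ground-truth (MOS-style) scores
    text_logits:  (B, T, V) token logits for the generated quality explanation
    text_targets: (B, T)    target token ids, with ignored positions set to -100
    """
    score_loss = F.l1_loss(pred_score, gt_score)              # quantitative scoring precision
    lm_loss = F.cross_entropy(                                # natural-language interpretation
        text_logits.flatten(0, 1), text_targets.flatten(), ignore_index=-100
    )
    return score_loss + lam * lm_loss
```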
Problem

Research questions and friction points this paper is trying to address.

Existing VQualA LMMs overfit due to single-task focus
Current models lack generalization across modalities and tasks
Full-parameter fine-tuning limits transferability in visual assessment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Vision-encoder-centered generative pre-training pipeline
Machine-executed annotation-scrutiny paradigm for dataset
Multi-task training enhancing scoring and interpretation capabilities