🤖 AI Summary
Current multimodal large language models (MLLMs) exhibit substantial improvements in foundational capabilities but remain significantly misaligned with human preferences. To address this, we propose a "prompt–response–preference" triadic data construction paradigm grounded in human values, enabling systematic alignment. Based on this paradigm, we curate OmniAlign-V, a high-quality dataset of 200K multimodal training samples, and introduce MM-AlignBench, the first dedicated benchmark for evaluating multimodal preference alignment. Our methodology integrates multimodal instruction engineering, fine-grained human preference modeling, and standardized annotation protocols, combined with supervised fine-tuning (SFT) and direct preference optimization (DPO). Extensive experiments demonstrate that our approach substantially enhances MLLMs' alignment with human preferences while preserving or even improving performance on standard vision-language tasks such as VQA. All data, benchmarks, and code are publicly released.
📝 Abstract
Recent advancements in open-source multimodal large language models (MLLMs) have primarily focused on enhancing foundational capabilities, leaving a significant gap in human preference alignment. This paper introduces OmniAlign-V, a comprehensive dataset of 200K high-quality training samples featuring diverse images, complex questions, and varied response formats to improve MLLMs' alignment with human preferences. We also present MM-AlignBench, a human-annotated benchmark specifically designed to evaluate MLLMs' alignment with human values. Experimental results show that fine-tuning MLLMs on OmniAlign-V, with either Supervised Fine-Tuning (SFT) or Direct Preference Optimization (DPO), significantly enhances human preference alignment while maintaining or enhancing performance on standard VQA benchmarks, thereby preserving their fundamental capabilities. Our datasets, benchmark, code, and checkpoints have been released at https://github.com/PhoenixZ810/OmniAlign-V.
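For readers unfamiliar with the DPO stage mentioned above, the sketch below shows the standard per-example DPO objective (Rafailov et al., 2023), which the paper applies to its preference pairs; it is a minimal illustration of the general technique, not code from the OmniAlign-V repository, and the function name and log-probability inputs are illustrative assumptions.

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Per-example DPO loss:
    -log sigmoid(beta * [(log pi(y_w|x) - log pi_ref(y_w|x))
                         - (log pi(y_l|x) - log pi_ref(y_l|x))])
    y_w / y_l are the preferred / rejected responses; all inputs are
    sequence log-probabilities under the policy and frozen reference model.
    """
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    # Numerically this is log(1 + exp(-margin)), the softplus of -margin.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# The loss shrinks as the policy prefers y_w over y_l more than the
# reference model does.
loss = dpo_loss(logp_chosen=-10.0, logp_rejected=-12.0,
                ref_logp_chosen=-11.0, ref_logp_rejected=-11.0)
```

Intuitively, the reference model anchors the policy so that preference training sharpens the preferred/rejected margin without drifting far from the SFT model, which is consistent with the paper's observation that alignment improves while standard VQA performance is preserved.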