🤖 AI Summary
Multimodal instruction alignment suffers from a lack of human preference data, unified alignment methodologies, and reliable evaluation frameworks. Method: We propose the first full-modality (text/image/audio/video) instruction alignment framework, featuring (i) a 200K-sample cross-modal human preference dataset; (ii) a language-feedback-driven unified alignment paradigm integrating RL with Language Feedback (RL-LF), multimodal preference modeling, a unified instruction encoder, and cross-modal reward modeling; and (iii) Eval-Anything, the first comprehensive multimodal capability benchmark. Results: Our framework significantly improves instruction-following performance across arbitrary input-output modality combinations, achieving an average +23.6% gain on Eval-Anything. All datasets, models, and code are publicly released, establishing foundational resources for multimodal alignment research.
📝 Abstract
Reinforcement learning from human feedback (RLHF) has proven effective in enhancing the instruction-following capabilities of large language models; however, it remains underexplored in the cross-modality domain. As the number of modalities increases, aligning all-modality models with human intentions -- such as instruction following -- becomes a pressing challenge. In this work, we make the first attempt to fine-tune all-modality models (i.e., models whose input and output can be of any modality, also called any-to-any models) using human preference data across all modalities (including text, image, audio, and video), ensuring their behavior aligns with human intentions. This endeavor presents several challenges. First, existing open-source resources offer no large-scale all-modality human preference data, as most datasets are limited to specific modalities, predominantly text and image. Second, the effectiveness of binary preferences in RLHF for post-training alignment in complex all-modality scenarios remains unexplored. Finally, there is no systematic framework for evaluating the capabilities of all-modality models, particularly regarding modality selection and synergy. To address these challenges, we propose the align-anything framework, which includes 200K meticulously annotated all-modality human preference data points. We then introduce an alignment method that learns from unified language feedback, effectively capturing complex, modality-specific human preferences and enhancing the model's instruction-following capabilities. Furthermore, to assess performance improvements in all-modality models after post-training alignment, we construct a challenging all-modality capability evaluation framework -- eval-anything. All data, models, and code frameworks have been open-sourced for the community. For more details, please refer to https://github.com/PKU-Alignment/align-anything.
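The binary-preference training the abstract contrasts with unified language feedback typically reduces to a Bradley-Terry reward-modeling objective: given reward scores for a chosen and a rejected response, minimize the negative log-likelihood that the chosen one wins. The sketch below is an illustration of that standard objective only, assuming scalar reward scores; the function name `preference_loss` is ours and is not taken from the released align-anything code.

```python
import numpy as np

def preference_loss(r_chosen, r_rejected):
    """Bradley-Terry negative log-likelihood for binary preferences.

    loss = -log sigmoid(r_chosen - r_rejected), averaged over a batch.
    A larger margin (chosen scored above rejected) gives a smaller loss.
    """
    margin = np.asarray(r_chosen, dtype=float) - np.asarray(r_rejected, dtype=float)
    # log(1 + exp(-m)) == -log(sigmoid(m)), computed stably via log1p
    return float(np.mean(np.log1p(np.exp(-margin))))

# Equal scores carry no preference signal: loss = log 2 ≈ 0.693.
# As the reward model separates chosen from rejected, the loss shrinks.
print(preference_loss(1.0, 1.0))   # ≈ 0.693
print(preference_loss(3.0, 0.0))   # much smaller
```

In the all-modality setting the abstract describes, a single scalar comparison like this is what becomes hard to interpret across modalities, which motivates replacing or augmenting it with richer language feedback.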