Modality-Balancing Preference Optimization of Large Multimodal Models by Adversarial Negative Mining

📅 2025-05-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large multimodal models (LMMs) suffer from modality imbalance: strong linguistic priors in the LLM backbone overwhelm visual inputs, leading to poor generalization and frequent hallucinations. Existing preference optimization methods do not target these inherent backbone biases when curating training data, and their reliance on static offline data leaves them unable to adapt to distributional shifts during training; meanwhile, Group Relative Policy Optimization (GRPO), which has shown efficacy in unimodal reasoning alignment, remains largely unexplored for LMM alignment. This paper proposes Modality-Balancing Preference Optimization (MBPO), a framework that combines offline adversarial negative mining—generating hard negative responses biased toward language priors via adversarial image perturbations—with online response generation on close-ended tasks, whose easily verified answers yield reliable rewards. The model is then trained with GRPO on the resulting offline-online hybrid data. Extensive experiments demonstrate significant gains across multiple benchmarks, substantial hallucination reduction, and consistent superiority over state-of-the-art preference optimization approaches.
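The adversarial negative mining step described above can be sketched as a one-step sign-gradient perturbation of the input image that increases a loss tied to visual grounding, pushing the model toward language-prior-driven (rejected) responses. The helper names and the finite-difference gradient below are illustrative assumptions, not the paper's implementation, which would backpropagate through the LMM's vision encoder:

```python
def fgsm_perturb(pixels, loss_fn, epsilon=0.03, h=1e-4):
    """One-step sign-gradient perturbation of a flattened image.

    Gradients are estimated by finite differences so this sketch stays
    dependency-free; a real implementation would use autograd through
    the model instead of bumping pixels one at a time.
    """
    base = loss_fn(pixels)
    grad_signs = []
    for i in range(len(pixels)):
        bumped = list(pixels)
        bumped[i] += h
        diff = loss_fn(bumped) - base
        grad_signs.append(1.0 if diff > 0 else -1.0 if diff < 0 else 0.0)
    # Step each pixel in the direction that increases the loss, then clamp to [0, 1].
    return [min(1.0, max(0.0, p + epsilon * s)) for p, s in zip(pixels, grad_signs)]
```

Responses generated on such perturbed images, where visual evidence is degraded, would serve as the hard negatives in the offline preference dataset.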

📝 Abstract
The task adaptation and alignment of Large Multimodal Models (LMMs) have been significantly advanced by instruction tuning and further strengthened by recent preference optimization. Yet, most LMMs still suffer from severe modality imbalance during reasoning, i.e., outweighing language prior biases over visual inputs, which bottlenecks their generalization to downstream tasks and causes hallucinations. However, existing preference optimization approaches for LMMs do not focus on restraining the internal biases of their Large Language Model (LLM) backbones when curating the training data. Moreover, they heavily rely on offline data and lack the capacity to explore diverse responses adaptive to dynamic distributional shifts during training. Meanwhile, Group Relative Policy Optimization (GRPO), a recent method using online-generated data and verified rewards to improve reasoning capabilities, remains largely underexplored in LMM alignment. In this paper, we propose a novel preference learning framework, Modality-Balancing Preference Optimization (MBPO), to address the modality imbalance in LMMs. MBPO constructs a more effective offline preference dataset by generating hard negatives, i.e., rejected responses misled by LLM biases due to limited usage of visual information, through adversarial perturbation of input images. Moreover, MBPO leverages the easy-to-verify nature of close-ended tasks to generate online responses with verified rewards. GRPO is then employed to train the model with offline-online hybrid data. Extensive experiments demonstrate that MBPO can enhance LMM performance on challenging vision-language tasks and effectively reduce hallucinations.
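The abstract's verified-reward idea for close-ended tasks pairs naturally with GRPO's group-relative baseline: each sampled response is scored against the ground truth, and its reward is normalized within its rollout group. A minimal sketch, where the exact-match reward and per-group standardization are illustrative choices rather than the paper's exact recipe:

```python
from statistics import mean, pstdev

def verified_reward(response: str, ground_truth: str) -> float:
    """Binary reward for a close-ended task: exact match after light normalization."""
    return 1.0 if response.strip().lower() == ground_truth.strip().lower() else 0.0

def group_relative_advantages(rewards):
    """Normalize each reward against its own rollout group (the GRPO baseline)."""
    mu, sigma = mean(rewards), pstdev(rewards)
    if sigma == 0.0:  # all rollouts scored the same: no learning signal from this group
        return [0.0] * len(rewards)
    return [(r - mu) / sigma for r in rewards]
```

Correct rollouts in a mixed group receive positive advantages and incorrect ones negative, which is the signal GRPO uses to update the policy.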
Problem

Research questions and friction points this paper is trying to address.

Addresses modality imbalance in Large Multimodal Models (LMMs)
Reduces language bias over visual inputs in LMMs
Improves LMM generalization and reduces hallucinations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adversarial negative mining balances modality preferences
Online-offline hybrid data enhances dynamic adaptation
GRPO optimizes multimodal alignment with verified rewards
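The online-offline hybrid idea above can be sketched as simple batch mixing: each training batch draws partly from the static offline hard-negative preference pairs and partly from freshly generated online rollout groups. The function name, data shapes, and the 50/50 split are assumptions for illustration, not details from the paper:

```python
import random

def build_hybrid_batch(offline_pairs, online_groups, batch_size=8, online_ratio=0.5):
    """Mix offline hard-negative preference pairs with fresh online rollout groups.

    online_ratio controls adaptivity: more online data tracks the current
    policy's distribution, while offline pairs keep the hard negatives in play.
    """
    n_online = min(int(batch_size * online_ratio), len(online_groups))
    n_offline = min(batch_size - n_online, len(offline_pairs))
    batch = random.sample(online_groups, n_online) + random.sample(offline_pairs, n_offline)
    random.shuffle(batch)  # interleave the two sources within the batch
    return batch
```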
Authors
Chenxi Liu — University of Maryland, College Park
Tianyi Xiong — University of Maryland, College Park
Ruibo Chen — University of Maryland, College Park
Yihan Wu — University of Maryland, College Park
Junfeng Guo — University of Maryland, College Park (Trustworthy Machine Learning, Computer Vision, Natural Language Processing)
Tianyi Zhou — University of Maryland, College Park
Heng Huang — Brendan Iribe Endowed Professor in Computer Science, University of Maryland, College Park (Machine Learning, AI, Biomedical Data Science, Computer Vision)