🤖 AI Summary
This work addresses the susceptibility of multimodal large language models (MLLMs) to perceptual fragility and hallucination in complex visual scenes, as well as their reliance on static, costly training data. To this end, the authors propose the Adversarial Opponent Training (AOT) framework, which introduces self-play reinforcement learning into MLLM robustness training for the first time. AOT dynamically generates adversarial examples through the co-evolution of an image-editing attacker and an MLLM defender, establishing a scalable training loop. Combined with supervised fine-tuning on AOT-SFT, a large-scale adversarial dataset, the approach significantly enhances the model's perceptual robustness in complex scenes, effectively suppresses hallucinations, and demonstrates the effectiveness and scalability of the AOT framework.
📝 Abstract
Despite their impressive capabilities, Multimodal Large Language Models (MLLMs) exhibit perceptual fragility when confronted with visually complex scenes. This weakness stems from a reliance on finite training datasets, which are prohibitively expensive to scale and impose a ceiling on model robustness. We introduce **AOT-SFT**, a large-scale adversarial dataset for bootstrapping MLLM robustness. Building on this, we propose **AOT (Adversarial Opponent Training)**, a self-play framework that forges MLLM robustness by creating its own training data. Our method orchestrates a co-evolution between an image-editing Attacker and a Defender MLLM, where the Attacker generates a diverse and dynamic curriculum of image manipulations, forcing the Defender to adapt and improve. Extensive experiments demonstrate that AOT enhances the Defender's perceptual robustness and reduces hallucinations, establishing a scalable paradigm for training more reliable MLLMs.
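To make the co-evolution loop concrete, the sketch below shows one plausible shape of the Attacker/Defender self-play described in the abstract. All names here (`attacker_edit`, `defender_answer`, `check_answer`, the zero-sum reward scheme) are hypothetical stand-ins, not the paper's actual models or update rules; the point is only the structure of the loop, in which each round's adversarial image seeds the next round's attack, forming a dynamic curriculum.

```python
def attacker_edit(image: str, step: int) -> str:
    """Stand-in for the image-editing Attacker: applies one manipulation
    per round (here, just a tag marking the edit)."""
    return f"{image}+edit{step}"

def defender_answer(image: str, question: str) -> str:
    """Stand-in for the Defender MLLM answering about the edited image."""
    return f"answer({image},{question})"

def check_answer(answer: str) -> bool:
    """Hypothetical verifier judging the Defender's answer.
    A real system would compare against ground truth; this stub
    always accepts well-formed answers."""
    return answer.startswith("answer(")

def run_self_play(image: str, question: str, rounds: int = 3):
    """One self-play episode: the Attacker perturbs the image, the
    Defender answers, and the two receive opposing (zero-sum) rewards.
    In training, these rewards would drive RL updates to both players."""
    history = []
    for step in range(rounds):
        adv_image = attacker_edit(image, step)
        answer = defender_answer(adv_image, question)
        attacker_reward = 0.0 if check_answer(answer) else 1.0
        defender_reward = 1.0 - attacker_reward  # zero-sum signal
        history.append((adv_image, defender_reward))
        image = adv_image  # next attack builds on the last edit (curriculum)
    return history
```

Because each round's output image is fed back as the next round's input, the Attacker compounds manipulations over an episode, which is one way the "diverse and dynamic curriculum" could scale without new human-labeled data.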