🤖 AI Summary
To address robustness bottlenecks in large multimodal models (LMMs), specifically weak vision-text feature alignment and limited cross-modal generalization, this paper proposes an end-to-end vision-text sequential reconstruction framework. Methodologically, it introduces two key innovations: (1) a Directed-Tokens mechanism that explicitly models fine-grained correspondences between image regions and textual tokens; and (2) an Image-to-Response Guided loss that optimizes cross-modal sequence reconstruction during both pretraining and fine-tuning. Together, these components improve multimodal alignment accuracy and inference consistency. Empirically, the method achieves state-of-the-art performance across diverse benchmarks, including academic evaluation suites (e.g., MMBench, OCRBench) and instruction-following benchmarks (e.g., MM-Vet, Qwen-VL-Bench), consistently outperforming existing large vision-language models.
📝 Abstract
Large multimodal models (LMMs) have achieved impressive performance thanks to their strong capabilities across a wide range of understanding tasks. However, these models still suffer from fundamental limitations in robustness and generalization caused by imperfect alignment and correlation between visual and textual features. In this paper, we introduce a simple but effective learning mechanism that improves the robustness of vision-text alignment by solving shuffling problems. In particular, the proposed approach improves reasoning capability, visual understanding, and cross-modality alignment by introducing two new tasks, reconstructing the image order and the text order, into the LMM's pre-training and fine-tuning phases. In addition, we propose a new directed-token approach to capture visual and textual knowledge, enabling the model to reconstruct the correct order of its visual inputs. We then introduce a new Image-to-Response Guided loss to further improve the visual understanding reflected in the LMM's responses. The proposed approach consistently achieves state-of-the-art (SoTA) performance compared with prior LMMs on academic task-oriented and instruction-following LMM benchmarks.
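The core "shuffling" objective described above, i.e., showing the model image patches (or text tokens) in a permuted order and training it to recover the original order, can be sketched as a toy sample builder. This is a minimal illustration under assumed names (`make_order_reconstruction_sample`, `reconstruct`); it is not the paper's actual code or API.

```python
import random

def make_order_reconstruction_sample(num_patches, seed=None):
    """Build a toy training sample for the order-reconstruction task.

    The model would see patches in shuffled order and predict, for each
    shuffled position, the patch's original index (a classification over
    positions). Here we only construct the inputs and the supervision
    target; the strings stand in for patch embeddings.
    """
    rng = random.Random(seed)
    perm = list(range(num_patches))
    rng.shuffle(perm)  # perm[i] = original index of the patch shown at position i
    inputs = [f"patch_{j}" for j in perm]
    # The target is the permutation itself: predicting each patch's
    # original index is exactly what recovers the correct order.
    targets = perm
    return inputs, targets

def reconstruct(inputs, predicted_indices):
    """Reorder shuffled patches using (predicted) original indices."""
    out = [None] * len(inputs)
    for pos, orig_idx in enumerate(predicted_indices):
        out[orig_idx] = inputs[pos]
    return out
```

With perfect predictions (`predicted_indices == targets`), `reconstruct` returns the patches in their original order; during training, the gap between predicted and true indices would drive the reconstruction loss. The same recipe applies to shuffled text tokens.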