Directed-Tokens: A Robust Multi-Modality Alignment Approach to Large Language-Vision Models

📅 2025-08-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address robustness bottlenecks in large multimodal models (LMMs) around vision–text feature alignment and cross-modal generalization, this paper proposes an end-to-end vision–text sequence-reconstruction framework. It introduces two key innovations: (1) a Directed-Tokens mechanism that explicitly models fine-grained correspondences between image regions and text tokens; and (2) an Image-to-Response Guided loss that jointly optimizes cross-modal sequence reconstruction during both pre-training and fine-tuning. Together, these components improve multimodal alignment accuracy and inference consistency. Empirically, the method achieves state-of-the-art results across diverse benchmarks, including academic evaluation suites (e.g., MMBench, OCRBench) and instruction-following benchmarks (e.g., MM-Vet, Qwen-VL-Bench), outperforming prior large language-vision models.

📝 Abstract
Large multimodal models (LMMs) have gained impressive performance due to their outstanding capability in various understanding tasks. However, these models still suffer from some fundamental limitations related to robustness and generalization due to the alignment and correlation between visual and textual features. In this paper, we introduce a simple but efficient learning mechanism for improving the robust alignment between visual and textual modalities by solving shuffling problems. In particular, the proposed approach can improve reasoning capability, visual understanding, and cross-modality alignment by introducing two new tasks: reconstructing the image order and the text order into the LMM's pre-training and fine-tuning phases. In addition, we propose a new directed-token approach to capture visual and textual knowledge, enabling the capability to reconstruct the correct order of visual inputs. Then, we introduce a new Image-to-Response Guided loss to further improve the visual understanding of the LMM in its responses. The proposed approach consistently achieves state-of-the-art (SoTA) performance compared with prior LMMs on academic task-oriented and instruction-following LMM benchmarks.
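The abstract's order-reconstruction tasks (shuffling the image-patch and text token sequences, then training the model to recover the original order) can be sketched as follows. This is a minimal illustration with hypothetical helper names; the paper's actual implementation operates on token embeddings inside the LMM, not on Python lists.

```python
import random

def make_order_reconstruction_sample(tokens, seed=None):
    """Shuffle a token sequence and return (shuffled_tokens, target_order).

    target_order[i] is the original position of shuffled_tokens[i], so a
    model trained to predict target_order learns to restore the sequence.
    Per the abstract, this objective is applied to both image-patch and
    text token sequences during pre-training and fine-tuning.
    """
    rng = random.Random(seed)
    order = list(range(len(tokens)))
    rng.shuffle(order)
    shuffled = [tokens[i] for i in order]
    return shuffled, order

def restore(shuffled, order):
    """Invert the shuffle: place shuffled[i] back at position order[i]."""
    out = [None] * len(shuffled)
    for i, pos in enumerate(order):
        out[pos] = shuffled[i]
    return out
```

A model that solves this auxiliary task must encode where each visual or textual token belongs relative to the others, which is the cross-modal alignment signal the paper is after.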
Problem

Research questions and friction points this paper is trying to address.

Improving robust alignment between visual and textual modalities
Solving shuffling problems in multi-modality understanding
Enhancing reasoning capability and visual-textual correlation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reconstructing image and text order tasks
Directed-token approach for visual-textual knowledge
Image-to-Response Guided loss enhancement
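One plausible reading of the Image-to-Response Guided loss is a re-weighted response cross-entropy that up-weights tokens grounded in the image. The weighting scheme below is an assumption made for illustration, not the paper's formula.

```python
import math

def image_guided_ce(token_log_probs, image_attention, alpha=1.0):
    """Hypothetical sketch of an image-guided response loss.

    token_log_probs: log p(correct token) at each response position.
    image_attention: fraction of attention mass (in [0, 1]) each
        response token places on the image tokens.
    Tokens that attend more to the image are up-weighted, encouraging
    responses to rely on visual evidence. This weighting is an
    assumption for illustration only.
    """
    assert len(token_log_probs) == len(image_attention)
    weights = [1.0 + alpha * a for a in image_attention]
    return -sum(w * lp for w, lp in zip(weights, token_log_probs)) / sum(weights)
```

With `image_attention` all zero the loss reduces to the ordinary mean negative log-likelihood, so the guidance term only perturbs training where visual grounding is present.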