AVG-LLaVA: A Large Multimodal Model with Adaptive Visual Granularity

📅 2024-09-20
🏛️ arXiv.org
📈 Citations: 4
✨ Influential: 0
📄 PDF
🤖 AI Summary
To address the inefficient inference caused by visual token redundancy in high-resolution image understanding, this paper proposes an adaptive visual granularity mechanism. The method introduces a joint architecture comprising a visual granularity scaler and a visual granularity router to enable dynamic selection among multi-scale visual features. It further proposes RGLF, a training paradigm that aligns the router's granularity choice with the LMM's own semantic preferences end-to-end, without requiring human annotations. Built on the LLaVA-NeXT framework, the approach combines hierarchical pooling, a Transformer-MLP-voter routing module, and RGLF-based alignment. Evaluated on 11 mainstream benchmarks, it achieves superior performance; on AI2D, it reduces visual tokens by 85.3% and accelerates inference by 2.53×, improving both efficiency and accuracy.

πŸ“ Abstract
Recently, when dealing with high-resolution images, dominant LMMs usually divide them into multiple local images and one global image, which will lead to a large number of visual tokens. In this work, we introduce AVG-LLaVA, an LMM that can adaptively select the appropriate visual granularity based on the input image and instruction. This approach not only reduces the number of visual tokens and speeds up inference, but also improves the overall model performance. Specifically, we introduce the following modules based on LLaVA-NeXT: (a) a visual granularity scaler that includes multiple pooling layers to obtain visual tokens with different granularities; (b) a visual granularity router, which includes a Transformer layer, an MLP layer, and a voter layer, used to select the appropriate visual granularity based on the image and instruction. Furthermore, we propose RGLF, a novel training paradigm that aims at aligning the granularity predicted by the router with the preferences of the LMM, without the need for additional manually annotated data. Extensive experiments and analysis show that AVG-LLaVA achieves superior performance across 11 benchmarks, as well as significantly reduces the number of visual tokens and speeds up inference (e.g., an 85.3% reduction in visual tokens and a 2.53× increase in inference speed on the AI2D benchmark).
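The scaler-plus-router pipeline from the abstract can be sketched in a few lines. This is a minimal NumPy illustration only: it assumes average pooling over a square token grid for the scaler, and replaces the paper's Transformer-MLP-voter router with a toy similarity score between each candidate's mean token and an instruction embedding. All function names and the scoring rule are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def pool_tokens(tokens, grid, factor):
    """Average-pool a (grid*grid, dim) visual token map by `factor` per axis."""
    dim = tokens.shape[-1]
    g = grid // factor
    t = tokens.reshape(g, factor, g, factor, dim)
    return t.mean(axis=(1, 3)).reshape(g * g, dim)

def granularity_scaler(tokens, grid, factors=(1, 2, 4)):
    """Candidate token sets at several granularities, finest to coarsest."""
    return [pool_tokens(tokens, grid, f) for f in factors]

def route(candidates, instruction_emb):
    """Toy stand-in for the router: score each granularity by the similarity
    of its mean token to the instruction embedding, then pick the argmax
    (the paper instead uses a Transformer layer, an MLP, and a voter)."""
    scores = np.array([c.mean(axis=0) @ instruction_emb for c in candidates])
    return int(np.argmax(scores)), scores

# Example: an 8x8 grid of 16-dim tokens yields 64 / 16 / 4 token candidates,
# and the router selects one granularity for the given instruction.
rng = np.random.default_rng(0)
tokens = rng.standard_normal((64, 16))
cands = granularity_scaler(tokens, grid=8)
choice, scores = route(cands, instruction_emb=rng.standard_normal(16))
```

Selecting the coarser candidates is what yields the token reduction the abstract reports: dropping from 64 to 4 tokens per image crop is the kind of saving behind the 85.3% figure.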
Problem

Research questions and friction points this paper is trying to address.

High-resolution inputs produce a large number of redundant visual tokens in LMMs
Fixed visual granularity cannot adapt to the input image and instruction
Training a granularity selector normally requires extra manual annotations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive visual granularity selection based on image and instruction
Visual granularity router built from Transformer, MLP, and voter layers
RGLF training paradigm aligning router predictions with LMM preferences
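The page describes RGLF only as aligning the router's predicted granularity with the LMM's preferences without manually annotated data. Purely as an illustration of what such a preference-alignment objective could look like, the sketch below treats per-granularity LMM scores (e.g., answer likelihoods) as soft targets for the router; the loss form and both function names are assumptions, not the paper's actual RGLF formulation.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def preference_alignment_loss(router_logits, lmm_scores):
    """Illustrative alignment objective (NOT the paper's exact RGLF loss):
    turn the LMM's per-granularity scores into a soft target distribution
    and minimize cross-entropy from it to the router's distribution, so no
    human-annotated granularity labels are needed."""
    p = softmax(np.asarray(router_logits, dtype=float))
    q = softmax(np.asarray(lmm_scores, dtype=float))
    return float(-(q * np.log(p + 1e-12)).sum())

# A router that agrees with the LMM's preference incurs a lower loss than
# one that ranks the granularities in the opposite order.
aligned = preference_alignment_loss([2.0, 0.5, -1.0], [2.0, 0.5, -1.0])
misaligned = preference_alignment_loss([-1.0, 0.5, 2.0], [2.0, 0.5, -1.0])
```

The appeal of this family of objectives is the one the summary highlights: the supervision signal comes from the model itself, so no human granularity labels are required.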