FILA: Fine-Grained Vision Language Models

πŸ“… 2024-12-11
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
To address the semantic fragmentation caused by dynamic cropping in high-resolution image understanding, this paper proposes HyViLM, a model that retains global context while encoding inputs of arbitrary resolution. Its core contributions are: (1) a hybrid visual encoder that jointly represents local sub-images and global fine-grained features; (2) a cross-layer optimal feature fusion mechanism enabling multi-granularity feature interaction under dynamic cropping; and (3) a vision-language alignment architecture adapted to high-resolution inputs. Evaluated on ten mainstream benchmarks, HyViLM achieves state-of-the-art (SOTA) performance on nine of them, notably improving TextVQA by 9.6% and DocVQA by 6.9%, significantly outperforming existing multimodal large models.

πŸ“ Abstract
Recently, there has been growing interest in the capability of multimodal large language models (MLLMs) to process high-resolution images. A common approach currently involves dynamically cropping the original high-resolution image into smaller sub-images, which are then fed into a vision encoder that was pre-trained on lower-resolution images. However, this cropping approach often truncates objects and connected areas in the original image, causing semantic breaks. To address this limitation, we introduce HyViLM, designed to process images of any resolution while retaining the overall context during encoding. Specifically, we: (i) Design a new visual encoder called Hybrid Encoder that not only encodes individual sub-images but also interacts with detailed global visual features, significantly improving the model's ability to encode high-resolution images. (ii) Propose an optimal feature fusion strategy for the dynamic cropping approach, effectively leveraging information from different layers of the vision encoder. Compared with the state-of-the-art MLLMs under the same setting, our HyViLM outperforms existing MLLMs in nine out of ten tasks. Specifically, HyViLM achieves a 9.6% improvement in performance on the TextVQA task and a 6.9% enhancement on the DocVQA task.
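The dynamic cropping the abstract describes can be illustrated with a minimal sketch. The tile size of 336 pixels and the padding-free grid rule are assumptions for illustration (they match common CLIP-style encoders), not the paper's exact cropping procedure; `crop_grid` is a hypothetical helper name.

```python
import math

def crop_grid(width, height, tile=336):
    """Compute sub-image boxes for dynamic cropping (illustrative only).

    Tiles the image into a grid of at most tile x tile sub-images,
    row-major. Edge tiles are clipped to the image bounds. Note how a
    single object spanning two boxes is split between sub-images: this
    is the "semantic break" HyViLM's global branch is meant to repair.
    """
    cols = math.ceil(width / tile)
    rows = math.ceil(height / tile)
    boxes = []
    for r in range(rows):
        for c in range(cols):
            boxes.append((c * tile, r * tile,
                          min((c + 1) * tile, width),
                          min((r + 1) * tile, height)))
    return boxes

# A 1000x700 image yields a 3x3 grid of nine sub-images,
# each fed separately to the low-resolution vision encoder.
print(len(crop_grid(1000, 700)))  # 9
```

Each box would then be resized to the encoder's native resolution; HyViLM additionally keeps a global view so the encoder sees the full context alongside the crops.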
Problem

Research questions and friction points this paper is trying to address.

Address semantic breaks in high-resolution image processing
Enhance global context retention in vision encoding
Improve performance on multimodal language model tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid Encoder integrates sub-images and global features
Optimal feature fusion enhances vision encoder layers
HyViLM processes any resolution without semantic breaks
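The cross-layer fusion idea above can be sketched as a weighted combination of per-token features drawn from different encoder layers. This is a minimal stand-in under assumed shapes (lists of per-token vectors); the paper's actual fusion is a learned mechanism, and `fuse_layers` and the uniform weights are illustrative assumptions.

```python
def fuse_layers(layer_feats, weights=None):
    """Weighted sum of features across encoder layers (illustrative).

    layer_feats: list of layers, each a list of per-token vectors,
                 all with the same token count and dimension.
    weights:     one scalar per layer; defaults to a uniform average.
    """
    n = len(layer_feats)
    if weights is None:
        weights = [1.0 / n] * n  # uniform fallback, not learned
    fused = []
    for tok in zip(*layer_feats):  # same token position, every layer
        dim = len(tok[0])
        vec = [sum(w * f[d] for w, f in zip(weights, tok))
               for d in range(dim)]
        fused.append(vec)
    return fused

# Two layers, one token, 2-dim features:
print(fuse_layers([[[2.0, 4.0]], [[0.0, 0.0]]]))  # [[1.0, 2.0]]
```

Shallow layers keep fine texture while deep layers carry semantics, so mixing them gives the language model multi-granularity visual features even when the input was dynamically cropped.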
πŸ”Ž Similar Papers
No similar papers found.