๐ค AI Summary
To address the inherent trade-off between local detail preservation and global context modeling in high-resolution image understanding, this paper proposes a dynamic image patching strategy that jointly optimizes fine-grained perception and semantic integration. Methodologically, we reconstruct the training pipeline of the Monkey VLM by introducing dynamic patch encoding, cross-patch attention mechanisms, and global feature concatenation. We present the first systematic empirical validation of patchingโs efficacy for visual restoration, uncovering a novel phenomenon: global context gains exhibit significant non-monotonic variation with respect to both task type and patch granularity. Evaluated on multiple high-resolution visual question answering (VQA) and grounding benchmarks, our model reproduces and surpasses the original modelโs fine-detail recognition capability. Incorporating global context yields an average accuracy improvement of 3.2% across tasks, with the most pronounced gains observed in medium-granularity tasks.
๐ Abstract
Reproducibility remains a cornerstone of scientific progress, yet complex multimodal models often lack transparent implementation details and accessible training infrastructure. In this work, we present a detailed reproduction and critical analysis of the Monkey Vision-Language Model (VLM) (Li et al. 2023b) published in CVPR24, a recent approach to high-resolution image understanding via image tiling. The original paper proposed splitting large images into tiles to recover fine-grained visual details while maintaining computational efficiency. Our study replicates this strategy using open checkpoints and reimplements the training pipeline. We confirm the key finding of the original Monkey VLM work, namely that tiling effectively recovers local details. We then extend this work further, by investigating the effect of the inclusion of the global context, which provide practical insights for future high-resolution multimodal modeling. However, we also report deviations in the results, with the magnitude of these effects depending heavily on task type and tile granularity.