🤖 AI Summary
Current vision-language models (VLMs) underperform on document understanding and web agent tasks because their underlying visual features lack sufficient structural and spatial awareness. To address this, we propose DAVE, a domain-specialized visual encoder. First, DAVE leverages self-supervised pretraining on large-scale unlabeled document and web page images to learn geometric and layout priors. Second, it employs supervised autoregressive modeling to strengthen fine-grained localization and parsing. Third, it introduces a novel cross-text-decoder model-merging mechanism, combined with hybrid feature distillation from general-purpose encoders (e.g., SigLIP2), to balance domain specificity and generalization. Evaluated on document understanding, visual question answering, web element localization, and web agent benchmarks, DAVE consistently outperforms state-of-the-art VLM visual encoders, making it the strongest open-source visual encoder purpose-built for document and web page understanding.
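The hybrid feature distillation mentioned above can be illustrated with a minimal, framework-free sketch. The cosine-similarity objective below is an assumption for illustration (the summary does not specify the exact loss): the specialized student encoder's features are pulled toward those of a frozen generalist teacher such as SigLIP2, while the usual task loss preserves domain-specific behavior.

```python
import math

def cosine_distill_loss(student_feat, teacher_feat):
    """Distillation term: 1 - cosine similarity between the student's
    feature vector and the frozen generalist teacher's feature vector.
    0.0 when the directions match exactly; larger as they diverge.
    (Hypothetical loss form, shown here only to illustrate the idea.)"""
    dot = sum(s * t for s, t in zip(student_feat, teacher_feat))
    norm_s = math.sqrt(sum(s * s for s in student_feat))
    norm_t = math.sqrt(sum(t * t for t in teacher_feat))
    return 1.0 - dot / (norm_s * norm_t)

def total_loss(task_loss, student_feat, teacher_feat, weight=0.5):
    """Blend the domain task loss with the distillation term; the
    weighting is an assumed hyperparameter, not from the paper."""
    return task_loss + weight * cosine_distill_loss(student_feat, teacher_feat)
```

In practice the features would be patch-level tensors from the two encoders, but the balancing act is the same: the distillation term keeps the specialized encoder anchored to general visual knowledge.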
📝 Abstract
While vision-language models (VLMs) have demonstrated remarkable performance across multimodal tasks, their choice of vision encoder presents a fundamental weakness: the encoder's low-level features lack the robust structural and spatial information essential for document understanding and web agents. To bridge this gap, we introduce DAVE, a vision encoder purpose-built for VLMs and tailored to these tasks. Our training pipeline leverages abundant unlabeled data, bypassing the need for costly large-scale annotations of document and web images. We begin with a self-supervised pretraining stage on unlabeled images, followed by a supervised autoregressive pretraining stage in which the model learns tasks such as parsing and localization from limited, high-quality data. Within the supervised stage, we adopt two strategies to improve our encoder's alignment with both general visual knowledge and diverse document and web agentic tasks: (i) a novel model-merging scheme that combines encoders trained with different text decoders, ensuring broad compatibility with different web agentic architectures; and (ii) ensemble training that fuses features from pretrained generalist encoders (e.g., SigLIP2) with our own document- and web-specific representations. Extensive experiments on classic document tasks, VQA, web localization, and agent-based benchmarks validate the effectiveness of our approach, establishing DAVE as a strong vision encoder for document and web applications.
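The model-merging scheme described in the abstract, combining encoders trained with different text decoders, is commonly realized as weight-space interpolation between checkpoints that share an architecture. The sketch below is a minimal, framework-free illustration of that idea; the flat-list parameter format, function names, and the 50/50 interpolation default are assumptions, not details from the paper.

```python
def merge_state_dicts(sd_a, sd_b, alpha=0.5):
    """Element-wise convex combination of two parameter dictionaries
    from identically shaped encoders: alpha * A + (1 - alpha) * B.
    Each value is a flat list of floats standing in for a weight tensor.
    (Hypothetical sketch of weight-space merging, not the paper's code.)"""
    assert sd_a.keys() == sd_b.keys(), "encoders must share an architecture"
    return {
        key: [alpha * a + (1.0 - alpha) * b
              for a, b in zip(sd_a[key], sd_b[key])]
        for key in sd_a
    }

# Two encoder checkpoints, each trained with a different text decoder:
encoder_with_decoder_x = {"proj.weight": [0.0, 2.0], "proj.bias": [1.0]}
encoder_with_decoder_y = {"proj.weight": [2.0, 4.0], "proj.bias": [3.0]}

merged = merge_state_dicts(encoder_with_decoder_x, encoder_with_decoder_y)
```

With real models the same interpolation would run over `state_dict()` tensors, producing a single encoder intended to stay compatible with both decoder families.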