🤖 AI Summary
Existing methods struggle with fine-grained, cross-page understanding of multi-page visual documents (e.g., manuals, PPTs), particularly in jointly modeling global document structure and local visual elements. To address this, we propose a hierarchical multi-agent framework featuring a three-tier reasoning architecture—global, page-level, and element-level—that dynamically activates specialized agents for integrated modeling of layout, color, icons, and cross-page references. Our approach generates structured representations without requiring predefined queries and incorporates multimodal input fusion with context-aware integration. Experiments on multi-page document understanding tasks demonstrate substantial improvements over baselines: +7.9 points over state-of-the-art closed-source models and +9.8 points over leading open-source models. These results validate the method’s accuracy, contextual consistency, and generalization capability across diverse document types and layouts.
📝 Abstract
Multi-page visual documents such as manuals, brochures, presentations, and posters convey key information through layout, colors, icons, and cross-slide references. While large language models (LLMs) offer opportunities in document understanding, current systems struggle with complex, multi-page visual documents, particularly with fine-grained reasoning over elements and pages. We introduce SlideAgent, a versatile agentic framework for understanding multi-modal, multi-page, and multi-layout documents, especially slide decks. SlideAgent employs specialized agents and decomposes reasoning into three levels (global, page, and element) to construct a structured, query-agnostic representation that captures both overarching themes and detailed visual or textual cues. During inference, SlideAgent selectively activates specialized agents for multi-level reasoning and integrates their outputs into coherent, context-aware answers. Extensive experiments show that SlideAgent achieves significant improvements over both proprietary (+7.9 overall) and open-source models (+9.8 overall).
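The three-tier decomposition described above can be sketched in code. This is a minimal illustration of the idea only: all class names, methods, and the string-based "representation" are hypothetical assumptions for exposition, not the authors' actual API or implementation. It shows the two stages the abstract describes, building a query-agnostic representation offline, then selectively activating page-level agents at inference time and fusing their outputs.

```python
# Hypothetical sketch of a three-tier (global / page / element) agent
# hierarchy; names and data structures are illustrative, not SlideAgent's API.
from dataclasses import dataclass, field


@dataclass
class ElementAgent:
    """Reasons over a single visual or textual element (icon, color, text)."""
    element: str

    def describe(self) -> str:
        return f"element:{self.element}"


@dataclass
class PageAgent:
    """Reasons over one page and delegates to its element-level agents."""
    page_id: int
    elements: list = field(default_factory=list)

    def describe(self) -> str:
        parts = [ElementAgent(e).describe() for e in self.elements]
        return f"page:{self.page_id}(" + ",".join(parts) + ")"


@dataclass
class GlobalAgent:
    """Holds the document-level view and coordinates page agents."""
    pages: list = field(default_factory=list)

    def build_representation(self) -> dict:
        # Offline stage: construct a structured representation of every
        # page without any predefined query (query-agnostic).
        return {p.page_id: p.describe() for p in self.pages}

    def answer(self, query: str, relevant_pages: set) -> str:
        # Inference stage: selectively activate only the page agents
        # relevant to the query, then fuse their outputs into one answer.
        rep = self.build_representation()
        activated = [rep[pid] for pid in sorted(relevant_pages) if pid in rep]
        return f"{query} -> " + " | ".join(activated)
```

For example, a deck with two pages would first be indexed in full, and a query touching only page 2 would activate only that page's agent; how SlideAgent actually scores page relevance and fuses multimodal outputs is not specified here.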