🤖 AI Summary
Traditional RAG methods struggle with visually rich documents (VRDs) by segmenting content into isolated text chunks, thereby discarding layout structure and cross-page dependencies; moreover, fixed-page retrieval limits evidence coverage in multi-page reasoning, degrading answer quality. This paper proposes a layout-aware dynamic retrieval framework: first, it constructs a symbolic document graph to explicitly model inter-page structural and semantic relationships; second, it introduces an LLM-agent collaboration mechanism that jointly leverages neural embeddings and symbolic graph topology for query-driven, adaptive evidence retrieval. Evaluated on multiple VRD question-answering benchmarks, our approach achieves >90% perfect recall, outperforms baseline retrieval by 20%, and significantly improves QA accuracy while maintaining low latency.
📄 Abstract
Question answering over visually rich documents (VRDs) requires reasoning not only over isolated content but also over documents' structural organization and cross-page dependencies. However, conventional retrieval-augmented generation (RAG) methods encode content in isolated chunks during ingestion, losing structural and cross-page dependencies, and retrieve a fixed number of pages at inference, regardless of the specific demands of the question or context. This often results in incomplete evidence retrieval and degraded answer quality for multi-page reasoning tasks. To address these limitations, we propose LAD-RAG, a novel Layout-Aware Dynamic RAG framework. During ingestion, LAD-RAG constructs a symbolic document graph that captures layout structure and cross-page dependencies, adding it alongside standard neural embeddings to yield a more holistic representation of the document. During inference, an LLM agent dynamically interacts with the neural and symbolic indices to adaptively retrieve the necessary evidence based on the query. Experiments on MMLongBench-Doc, LongDocURL, DUDE, and MP-DocVQA demonstrate that LAD-RAG improves retrieval, achieving over 90% perfect recall on average without any top-k tuning, and outperforming baseline retrievers by up to 20% in recall at comparable noise levels, yielding higher QA accuracy with minimal latency.