🤖 AI Summary
This work addresses the inefficiency of multimodal large language models in key information extraction tasks, where autoregressive inference hinders parallel processing of semantically independent fields. To overcome this limitation, the authors propose PIP (Parallel Inference Paradigm), a novel framework that introduces a [mask]-based generation mechanism to simultaneously predict all target fields in a single forward pass. By integrating tailored masked pretraining with large-scale supervised data, PIP achieves substantial inference speedups—ranging from 5× to 36×—while preserving near-perfect accuracy. This paradigm significantly enhances deployment efficiency in real-world applications without compromising model performance.
📝 Abstract
Key Information Extraction (KIE) from visually-rich documents (VrDs) is a critical task, for which recent Large Language Models (LLMs) and Multi-Modal Large Language Models (MLLMs) have demonstrated strong potential. However, their reliance on autoregressive inference, which generates outputs sequentially, creates a significant efficiency bottleneck, especially as KIE tasks often involve extracting multiple, semantically independent fields. To overcome this limitation, we introduce PIP: a Parallel Inference Paradigm for KIE. Our approach reformulates the problem by using "[mask]" tokens as placeholders for all target values, enabling their simultaneous generation in a single forward pass. To facilitate this paradigm, we develop a tailored mask pre-training strategy and construct large-scale supervised datasets. Experimental results show that our PIP models achieve a 5-36× inference speedup with negligible performance degradation compared to traditional autoregressive base models. By substantially improving efficiency while maintaining high accuracy, PIP paves the way for scalable and practical real-world KIE solutions.
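The core idea can be illustrated with a minimal sketch (not the authors' code): in the autoregressive setting each field costs one decoding call, whereas the PIP-style formulation places a "[mask]" per field and resolves all of them in one pass. Here `toy_model` is a hypothetical stand-in for an MLLM forward pass over a document.

```python
# Illustrative sketch only: contrasts per-field autoregressive decoding
# with PIP-style parallel mask filling. `toy_model` is a hypothetical
# stand-in for a single MLLM forward pass, not the paper's model.

def toy_model(document, queries):
    """One 'forward pass': answers every queried field at once."""
    return [document.get(q, "") for q in queries]

def autoregressive_extract(document, fields):
    # One model call per field -> cost grows linearly with field count.
    results = {}
    for f in fields:
        results[f] = toy_model(document, [f])[0]
    return results

def parallel_extract(document, fields):
    # Template with a [mask] placeholder per target value; a single
    # call fills all placeholders simultaneously.
    template = {f: "[mask]" for f in fields}
    values = toy_model(document, list(template))  # single forward pass
    return dict(zip(template, values))

doc = {"invoice_no": "A-1029", "date": "2024-05-01", "total": "$98.40"}
fields = ["invoice_no", "date", "total"]
# Both strategies return the same values; the parallel one used 1 call
# instead of len(fields) calls.
assert autoregressive_extract(doc, fields) == parallel_extract(doc, fields)
```

The speedup in the paper comes from replacing `len(fields)` sequential decoding steps with one batched prediction over all masked positions.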