🤖 AI Summary
This work addresses a critical bottleneck in document intelligence: the scarcity of large-scale, high-quality annotated data and the prohibitive cost of manual labeling. It proposes the first unified technical framework for data generation in this domain, reconceptualizing data generation as supervisory signal production and introducing a resource-oriented taxonomy grounded in the availability of data and labels. This taxonomy spans four paradigms: data augmentation, data generation from scratch, automated data annotation, and self-supervised signal construction. The framework integrates fragmented research efforts through this classification system and a multi-level evaluation protocol that jointly assesses intrinsic quality and extrinsic utility. Compiled results across diverse document intelligence benchmarks demonstrate the effectiveness of the surveyed methods, while the analysis highlights key challenges, such as fidelity gaps, and emerging directions, such as co-evolutionary ecosystems, establishing data generation as a foundational engine for next-generation document intelligence.
📝 Abstract
The advancement of Document Intelligence (DI) demands large-scale, high-quality training data, yet manual annotation remains a critical bottleneck. While data generation methods are evolving rapidly, existing surveys are constrained by a fragmented focus on single modalities or specific tasks, lacking a unified perspective aligned with real-world workflows. To fill this gap, this survey establishes the first comprehensive technical map for data generation in DI. Data generation is redefined as supervisory signal production, and a novel taxonomy is introduced based on the "availability of data and labels." This framework organizes methodologies into four resource-centric paradigms: Data Augmentation, Data Generation from Scratch, Automated Data Annotation, and Self-Supervised Signal Construction. Furthermore, a multi-level evaluation framework is established to integrate intrinsic quality and extrinsic utility, compiling performance gains across diverse DI benchmarks. Guided by this unified structure, the methodological landscape is dissected to reveal critical challenges, such as fidelity gaps, and frontiers, including co-evolutionary ecosystems. Ultimately, by systematizing this fragmented field, data generation is positioned as the central engine for next-generation DI.