🤖 AI Summary
This work proposes a controllable synthetic document generation framework based on vision-language models (VLMs) to address the scarcity, high cost, and privacy risks associated with acquiring high-quality annotated data for document intelligence. Starting from unlabeled document seeds, the approach combines clustering-guided seed selection, parametrized sampling, and semantic-visual decoupling, augmented with a diffusion-based handwriting model that injects context-aware visual elements, thereby producing annotated synthetic documents that are both semantically coherent and visually realistic. Experiments demonstrate that with only 100 real samples, the method achieves 87% of the average performance obtained using full real datasets across 11 benchmark tasks, marking the first demonstration that VLMs can scalably generate synthetic documents faithfully aligned with real-world distributions. The code and a dataset of over 140,000 synthetic samples are publicly released.
📝 Abstract
Effective document intelligence models rely on large amounts of annotated training data. However, procuring sufficient and high-quality data poses significant challenges due to the labor-intensive and costly nature of data acquisition. Additionally, leveraging language models to annotate real documents raises concerns about data privacy. Synthetic document generation has emerged as a promising, privacy-preserving alternative. We propose DocDjinn, a novel framework for controllable synthetic document generation using Vision-Language Models (VLMs) that produces annotated documents from unlabeled seed samples. Our approach generates visually plausible and semantically consistent synthetic documents that follow the distribution of an existing source dataset through clustering-based seed selection with parametrized sampling. By enriching documents with realistic diffusion-based handwriting and contextual visual elements via semantic-visual decoupling, we generate diverse, high-quality annotated synthetic documents. We evaluate our framework across eleven benchmarks spanning key information extraction, question answering, document classification, and document layout analysis. To our knowledge, this is the first work demonstrating that VLMs can generate faithful annotated document datasets at scale from unlabeled seeds that can effectively enrich or approximate real, manually annotated data for diverse document understanding tasks. We show that with only 100 real training samples, our framework achieves on average $87\%$ of the performance of the full real-world dataset. We publicly release our code and 140k+ synthetic document samples.
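The abstract's "clustering-based seed selection with parametrized sampling" can be illustrated with a minimal sketch: cluster embeddings of the unlabeled documents, then draw a fixed seed budget from each cluster so the selected seeds cover the source distribution. This is an assumption-laden toy (the function names `kmeans` and `select_seeds`, the farthest-point initialization, and the 2-D "embeddings" are all illustrative, not the paper's implementation).

```python
# Hypothetical sketch of clustering-guided seed selection: cluster document
# embeddings, then sample a per-cluster seed budget. Names and data are
# illustrative only, not DocDjinn's actual implementation.
import random

def kmeans(points, k, iters=20):
    # Deterministic farthest-point initialization, then Lloyd iterations.
    centers = [points[0]]
    while len(centers) < k:
        centers.append(max(points, key=lambda p: min(
            sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers)))
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: sum(
                (a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        for i, members in enumerate(clusters):
            if members:  # keep the old center if a cluster empties out
                centers[i] = tuple(sum(d) / len(members) for d in zip(*members))
    return centers, clusters

def select_seeds(clusters, per_cluster, seed=0):
    # Parametrized sampling: a fixed budget of seeds from each cluster keeps
    # the seed set spread across the embedding space.
    rng = random.Random(seed)
    seeds = []
    for members in clusters:
        seeds.extend(rng.sample(members, min(per_cluster, len(members))))
    return seeds

# Toy 2-D "document embeddings" forming two visible groups.
embeddings = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2),
              (5.0, 5.1), (5.2, 4.9), (4.9, 5.2)]
_, clusters = kmeans(embeddings, k=2)
seeds = select_seeds(clusters, per_cluster=1)
print(len(seeds))  # → 2, one representative seed per cluster
```

Each selected seed would then condition the VLM to generate an annotated synthetic document, so the synthetic set mirrors the cluster structure of the real data.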