DocDjinn: Controllable Synthetic Document Generation with VLMs and Handwriting Diffusion

📅 2026-02-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes a controllable synthetic document generation method built on vision-language models (VLMs) to address the scarcity, high cost, and privacy risks of acquiring high-quality annotated data for document intelligence. Starting from unlabeled document seeds, the approach combines clustering-guided seed selection, parametric sampling, and semantic-visual decoupling, augmented with a handwriting diffusion model that injects context-aware visual elements, to produce annotated synthetic documents that are both semantically coherent and visually realistic. Experiments show that with only 100 real samples, the method reaches 87% of the average performance obtained with the full real datasets across 11 benchmark tasks, the first demonstration that VLMs can scalably generate synthetic documents faithfully aligned with real-world distributions. The code and a dataset of over 140,000 synthetic samples are publicly released.

📝 Abstract
Effective document intelligence models rely on large amounts of annotated training data. However, procuring sufficient and high-quality data poses significant challenges due to the labor-intensive and costly nature of data acquisition. Additionally, leveraging language models to annotate real documents raises concerns about data privacy. Synthetic document generation has emerged as a promising, privacy-preserving alternative. We propose DocDjinn, a novel framework for controllable synthetic document generation using Vision-Language Models (VLMs) that produces annotated documents from unlabeled seed samples. Our approach generates visually plausible and semantically consistent synthetic documents that follow the distribution of an existing source dataset through clustering-based seed selection with parametrized sampling. By enriching documents with realistic diffusion-based handwriting and contextual visual elements via semantic-visual decoupling, we generate diverse, high-quality annotated synthetic documents. We evaluate across eleven benchmarks spanning key information extraction, question answering, document classification, and document layout analysis. To our knowledge, this is the first work demonstrating that VLMs can generate faithful annotated document datasets at scale from unlabeled seeds that can effectively enrich or approximate real, manually annotated data for diverse document understanding tasks. We show that with only 100 real training samples, our framework achieves on average $87\%$ of the performance of the full real-world dataset. We publicly release our code and 140k+ synthetic document samples.
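The abstract's clustering-based seed selection with parametrized sampling can be pictured as: cluster the unlabeled seed documents by their embeddings, then draw seeds from each cluster in proportion to its size, so the generated set follows the source distribution. The sketch below is a minimal, stdlib-only illustration under that reading; the function names, the use of plain k-means, and the proportional-allocation rule are all assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of clustering-guided seed selection: cluster seed
# embeddings with a tiny k-means, then sample seed indices per cluster
# proportionally to cluster size (a simple form of parametrized sampling).
# All names and the clustering choice are illustrative assumptions.
import math
import random


def kmeans(points, k, iters=20, rng=None):
    """Minimal k-means over float tuples; returns a cluster label per point."""
    rng = rng or random.Random(0)
    centers = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: nearest center by Euclidean distance.
        for i, p in enumerate(points):
            labels[i] = min(range(k), key=lambda c: math.dist(p, centers[c]))
        # Update step: move each center to the mean of its members
        # (keep the old center if a cluster ends up empty).
        for c in range(k):
            members = [points[i] for i in range(len(points)) if labels[i] == c]
            if members:
                centers[c] = tuple(sum(x) / len(members) for x in zip(*members))
    return labels


def select_seeds(embeddings, k, budget, rng=None):
    """Pick up to `budget` seed indices, allocated proportionally per cluster."""
    rng = rng or random.Random(0)
    labels = kmeans(embeddings, k, rng=rng)
    chosen = []
    for c in range(k):
        idx = [i for i, lab in enumerate(labels) if lab == c]
        # Proportional share of the budget, at least one seed per cluster.
        share = max(1, round(budget * len(idx) / len(embeddings)))
        chosen.extend(rng.sample(idx, min(share, len(idx))))
    return chosen[:budget]
```

In this reading, the selected seeds would then be handed to the VLM-driven generation stage; the handwriting-diffusion enrichment described in the abstract is a separate step not shown here.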
Problem

Research questions and friction points this paper is trying to address.

synthetic document generation
data privacy
annotated training data
document intelligence
Vision-Language Models
Innovation

Methods, ideas, or system contributions that make the work stand out.

synthetic document generation
vision-language models
handwriting diffusion
semantic-visual decoupling
controllable data synthesis
Marcel Lamott
RheinMain University of Applied Sciences, Wiesbaden, Germany
Saifullah Saifullah
German Research Center for Artificial Intelligence, Kaiserslautern, Germany
Nauman Riaz
German Research Center for Artificial Intelligence, Kaiserslautern, Germany
Yves-Noel Weweler
Insiders Technologies GmbH, Kaiserslautern, Germany
Tobias Alt-Veit
Insiders Technologies GmbH, Kaiserslautern, Germany
Ahmad Sarmad Ali
National University of Sciences and Technology (NUST), Islamabad, Pakistan
Muhammad Armaghan Shakir
National University of Sciences and Technology (NUST), Islamabad, Pakistan
Adrian Kalwa
RheinMain University of Applied Sciences, Wiesbaden, Germany
Momina Moetesum
National University of Sciences and Technology (NUST), Islamabad, Pakistan
Andreas Dengel
Professor of Computer Science, University of Kaiserslautern & Executive Director, DFKI
Artificial Intelligence, Machine Learning, Document Analysis, Semantic Technologies
Sheraz Ahmed
German Research Center for Artificial Intelligence - DFKI GmbH
Faisal Shafait
Professor, National University of Sciences and Technology (NUST)
Document Image Analysis, OCR, Image Processing, Computer Vision, Machine Learning
Ulrich Schwanecke
Computer Vision and Graphics, RheinMain University of Applied Sciences
Computer Graphics, Computer Vision, Machine Learning
Adrian Ulges
RheinMain University of Applied Sciences
Machine Learning, Natural Language Processing