🤖 AI Summary
To address the limited generalization of tampered document image detectors caused by data scarcity, this work proposes a method for generating high-fidelity, diverse tampered samples. The approach introduces a similarity-guided generative pipeline with two key components: a contrastive learning–based text block similarity assessment network, trained with a novel strategy for constructing positive and negative samples, and a character crop integrity discriminator. This framework substantially improves both the visual quality and diversity of the generated tampered images. Experiments show that models trained on the synthesized data achieve consistent, significant performance gains across multiple open-source datasets and model architectures.
📝 Abstract
Detecting tampered text in document images is a challenging task due to data scarcity. To address this, previous work has attempted to generate tampered documents using rule-based methods. However, the resulting documents often suffer from limited variety and poor visual quality, typically leaving highly visible artifacts that are rarely observed in real-world manipulations. This undermines a detection model's ability to learn robust, generalizable features and results in poor performance on real-world data. Motivated by this discrepancy, we propose a novel method for generating high-quality tampered document images. We first train an auxiliary network to compare text crops, leveraging contrastive learning with a novel strategy for defining positive pairs and their corresponding negatives. We also train a second auxiliary network to evaluate whether a crop tightly encloses the intended characters, without cutting off parts of characters or including parts of adjacent ones. Combining both networks in a carefully designed generation pipeline, we produce diverse, high-quality tampered document images. We assess the effectiveness of our data generation pipeline by training multiple models on datasets derived from the same source images, generated using our method and existing approaches, under identical training protocols. Evaluating these models on various open-source datasets shows that our pipeline yields consistent performance improvements across architectures and datasets.
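The abstract does not specify the contrastive objective used to train the text-crop similarity network, but a standard choice for learning from positive pairs with in-batch negatives is an InfoNCE-style loss. The sketch below is a minimal NumPy illustration under that assumption: `anchors[i]` and `positives[i]` are embeddings of two crops forming a positive pair (e.g. visually consistent renderings of the same text), while all other rows in the batch serve as negatives. The function name, temperature value, and pairing scheme are illustrative, not taken from the paper.

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """InfoNCE-style contrastive loss over crop embeddings.

    anchors, positives: (N, D) arrays; row i of each forms a positive
    pair, and every other row in the batch acts as a negative.
    """
    # L2-normalize so dot products are cosine similarities.
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature               # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    # Log-softmax over each row; the matching pair sits on the diagonal.
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

Trained this way, the network maps visually compatible crops close together, which the generation pipeline can then use to score whether a candidate tampered region blends in with its surrounding text.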