Counterfeit Answers: Adversarial Forgery against OCR-Free Document Visual Question Answering

📅 2025-12-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work exposes severe security vulnerabilities of OCR-free Document Visual Question Answering (DocVQA) models under semantically targeted adversarial forgery. Addressing the limitation of existing attacks—lack of semantic controllability—we propose the first semantic-targeted forgery framework tailored for DocVQA, supporting two attack objectives: *targeted misdirection* (inducing a specific incorrect answer) and *system failure* (globally degrading accuracy). Our method operates end-to-end on vision-language Transformer architectures (e.g., Pix2Struct, Donut), jointly optimizing visual perturbations and linguistic responses to produce semantically harmful yet visually imperceptible document forgeries. Evaluated on mainstream DocVQA benchmarks, the attack achieves high success rates and substantially impairs model performance. This study provides the first systematic empirical validation of security risks in end-to-end document understanding under realistic adversarial forgery, establishing a critical foundation for robustness analysis and defense design.
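The joint optimization described above can be sketched as a PGD-style loop under an L-infinity budget, covering both objectives: *targeted misdirection* (descend toward a chosen answer) and *system failure* (ascend away from the correct one). This is an illustrative toy, not the paper's actual algorithm: the linear "model", the loss, and all names (`forge`, `eps`, `alpha`) are assumptions for exposition; a real attack would backpropagate through Pix2Struct's or Donut's decoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an OCR-free DocVQA model: flattened document pixels -> answer logits.
# (Hypothetical; the real models are vision-language Transformers.)
W = rng.normal(size=(5, 16))  # 5 candidate answers, 16-pixel "document"

def logits(x):
    return W @ x

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def forge(x, label, eps=0.1, alpha=0.02, steps=40, targeted=True):
    """PGD-style document forgery under an L-infinity budget `eps`.

    targeted=True  -> targeted misdirection: push the model toward `label`.
    targeted=False -> system failure: push the model away from the true `label`.
    """
    x_adv = x.copy()
    onehot = np.eye(W.shape[0])[label]
    for _ in range(steps):
        p = softmax(logits(x_adv))
        # Cross-entropy gradient w.r.t. the input for the toy linear model:
        # dL/dx = W^T (softmax(Wx) - onehot).
        grad = W.T @ (p - onehot)
        # Descend toward the target answer, or ascend away from the true one.
        x_adv = x_adv - alpha * grad if targeted else x_adv + alpha * grad
        # Project back into the budget so the forgery stays imperceptible.
        x_adv = x + np.clip(x_adv - x, -eps, eps)
    return x_adv

doc = rng.normal(size=16)
target = 3
forged = forge(doc, target, targeted=True)
```

In the targeted mode the perturbation raises the model's probability for the attacker's chosen answer while never deviating from the clean document by more than `eps` per pixel; passing `targeted=False` with the model's current answer instead drives accuracy down without specifying a replacement.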

📝 Abstract
Document Visual Question Answering (DocVQA) enables end-to-end reasoning grounded on information present in a document input. While recent models have shown impressive capabilities, they remain vulnerable to adversarial attacks. In this work, we introduce a novel attack scenario that aims to forge document content in a visually imperceptible yet semantically targeted manner, allowing an adversary to induce specific or generally incorrect answers from a DocVQA model. We develop specialized attack algorithms that can produce adversarially forged documents tailored to different attackers' goals, ranging from targeted misinformation to systematic model failure scenarios. We demonstrate the effectiveness of our approach against two end-to-end state-of-the-art models: Pix2Struct, a vision-language transformer that jointly processes image and text through sequence-to-sequence modeling, and Donut, a transformer-based model that directly extracts text and answers questions from document images. Our findings highlight critical vulnerabilities in current DocVQA systems and call for the development of more robust defenses.
Problem

Research questions and friction points this paper is trying to address.

OCR-free DocVQA models can be misled by visually imperceptible forgeries of document content
Existing adversarial attacks on document understanding lack semantic controllability over the induced answers
The extent of these vulnerabilities in state-of-the-art models such as Pix2Struct and Donut had not been systematically measured
Innovation

Methods, ideas, or system contributions that make the work stand out.

First semantic-targeted adversarial forgery framework for OCR-free DocVQA models
Specialized algorithms supporting two attack objectives: targeted misdirection and system-wide failure
End-to-end evaluation against Pix2Struct and Donut on mainstream DocVQA benchmarks