DocMIA: Document-Level Membership Inference Attacks against DocVQA Models

📅 2025-02-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work presents the first systematic evaluation of privacy risks in Document Visual Question Answering (DocVQA) models under membership inference attacks. Targeting fine-grained, document-level leakage, it proposes unsupervised white-box and black-box attacks tailored to DocVQA that require neither auxiliary datasets nor ground-truth labels, relying only on model output confidence distributions and multi-answer consistency for statistical inference, in line with multimodal document understanding architectures. Eliminating supervised signals and surrogate data yields a more realistic threat model. Extensive experiments across state-of-the-art DocVQA models (e.g., LayoutLMv3, Donut) and benchmarks (DocVQA, CORD) show up to a 12.7% improvement in attack accuracy over prior art, establishing that DocVQA models suffer severe training-data leakage.

📝 Abstract
Document Visual Question Answering (DocVQA) has introduced a new paradigm for end-to-end document understanding, and has quickly become one of the standard benchmarks for multimodal LLMs. Automating document processing workflows, driven by DocVQA models, presents significant potential for many business sectors. However, documents tend to contain highly sensitive information, raising concerns about the privacy risks of training such DocVQA models. One significant privacy vulnerability, exploited by the membership inference attack, is the possibility for an adversary to determine whether a particular record was part of the model's training data. In this paper, we introduce two novel membership inference attacks tailored specifically to DocVQA models. These attacks are designed for two different adversarial scenarios: a white-box setting, where the attacker has full access to the model architecture and parameters, and a black-box setting, where only the model's outputs are available. Notably, our attacks assume the adversary lacks access to auxiliary datasets, which is more realistic in practice but also more challenging. Our unsupervised methods outperform existing state-of-the-art membership inference attacks across a variety of DocVQA models and datasets, demonstrating their effectiveness and highlighting the privacy risks in this domain.
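The black-box setting above, combined with the summary's mention of output confidence and multi-answer consistency, can be sketched as follows. This is a minimal illustration, not the paper's exact method: the function names, the product aggregation of confidence and consistency, and the 1-D 2-means thresholding are all assumptions introduced here for exposition.

```python
import numpy as np

def membership_scores(confidences, answers):
    """Per-document membership score from black-box outputs only.

    confidences: (n_docs, n_queries) answer confidences from repeated queries
    answers:     list of n_docs lists of answer strings for those queries
    Intuition: member documents tend to get higher-confidence and more
    mutually consistent answers than non-members.
    """
    conf = np.asarray(confidences, dtype=float)
    mean_conf = conf.mean(axis=1)
    # Multi-answer consistency: fraction of repeated queries that agree
    # with the most common answer for that document.
    consistency = np.array(
        [max(a.count(x) for x in set(a)) / len(a) for a in answers]
    )
    return mean_conf * consistency  # illustrative aggregate, not the paper's

def unsupervised_threshold(scores, n_iter=50):
    """Split scores into member/non-member groups with 1-D 2-means.

    No labels or auxiliary data are used, matching the unsupervised
    threat model; returns the midpoint between the two cluster centers.
    """
    s = np.sort(np.asarray(scores, dtype=float))
    lo, hi = s[0], s[-1]
    for _ in range(n_iter):
        to_hi = np.abs(s - lo) > np.abs(s - hi)  # nearest-center assignment
        new_lo = s[~to_hi].mean() if (~to_hi).any() else lo
        new_hi = s[to_hi].mean() if to_hi.any() else hi
        if new_lo == lo and new_hi == hi:
            break
        lo, hi = new_lo, new_hi
    return (lo + hi) / 2  # scores above this are predicted members
```

A document answered identically and confidently across repeated queries scores high; an inconsistently answered one scores low, and the unsupervised threshold separates the two without ground-truth membership labels.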
Problem

Research questions and friction points this paper is trying to address.

Addresses privacy risks in DocVQA models.
Introduces novel membership inference attacks.
Evaluates attacks in white-box and black-box scenarios.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unsupervised membership inference attacks
Tailored for DocVQA models
Effective in white-box and black-box settings