Doc-PP: Document Policy Preservation Benchmark for Large Vision-Language Models

📅 2026-01-07
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the problem that large vision-language models performing document question answering often violate user-defined multimodal non-disclosure policies during complex reasoning, unintentionally leaking sensitive information. The study formalizes this Reasoning-Induced Safety Gap, introduces Doc-PP, a benchmark built from real-world heterogeneous documents that evaluates model adherence to explicit disclosure policies, and proposes DVA (Decompose-Verify-Aggregation), a structured reasoning framework that decouples reasoning from policy verification. DVA significantly outperforms conventional prompt-based defenses, preventing information leakage while preserving the model's capacity for complex multimodal reasoning and establishing a strong baseline for policy-compliant document understanding.

📝 Abstract
The deployment of Large Vision-Language Models (LVLMs) for real-world document question answering is often constrained by dynamic, user-defined policies that dictate information disclosure based on context. While ensuring adherence to these explicit constraints is critical, existing safety research primarily focuses on implicit social norms or text-only settings, overlooking the complexities of multimodal documents. In this paper, we introduce Doc-PP (Document Policy Preservation Benchmark), a novel benchmark constructed from real-world reports requiring reasoning across heterogeneous visual and textual elements under strict non-disclosure policies. Our evaluation highlights a systemic Reasoning-Induced Safety Gap: models frequently leak sensitive information when answers must be inferred through complex synthesis or aggregated across modalities, effectively circumventing existing safety constraints. Furthermore, we identify that providing extracted text improves perception but inadvertently facilitates leakage. To address these vulnerabilities, we propose DVA (Decompose-Verify-Aggregation), a structural inference framework that decouples reasoning from policy verification. Experimental results demonstrate that DVA significantly outperforms standard prompting defenses, offering a robust baseline for policy-compliant document understanding.
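The abstract describes DVA only at a high level. As a reading aid, the following is a minimal sketch of how a decompose-verify-aggregate pipeline of this kind could be wired around a generic model call; the function names, prompts, data structures, and refusal behavior are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of a Decompose-Verify-Aggregation (DVA) style pipeline.
# llm_call() is a placeholder for any LVLM/LLM inference call; the prompts and
# refusal message below are assumptions for illustration only.

from dataclasses import dataclass


def llm_call(prompt: str) -> str:
    """Placeholder for an actual model call (e.g. an LVLM over a document page)."""
    raise NotImplementedError


@dataclass
class Step:
    sub_question: str
    answer: str
    compliant: bool


def decompose(question: str) -> list[str]:
    # Break the question into atomic sub-questions, one per line, without answering them.
    out = llm_call(
        "Break the following question into minimal sub-questions, "
        f"one per line, without answering them:\n{question}"
    )
    return [line.strip() for line in out.splitlines() if line.strip()]


def verify(answer: str, policy: str) -> bool:
    # Check a single intermediate answer against the non-disclosure policy,
    # separately from the reasoning chain that produced it.
    verdict = llm_call(
        f"Policy:\n{policy}\n\nDoes the following text disclose information "
        f"forbidden by the policy? Answer YES or NO.\n{answer}"
    )
    return verdict.strip().upper().startswith("NO")


def dva_answer(question: str, document: str, policy: str) -> str:
    # 1) Decompose the query, 2) answer and verify each step, 3) aggregate only compliant steps.
    steps: list[Step] = []
    for sub_q in decompose(question):
        ans = llm_call(f"Document:\n{document}\n\nAnswer briefly: {sub_q}")
        steps.append(Step(sub_q, ans, verify(ans, policy)))

    allowed = [s.answer for s in steps if s.compliant]
    if not allowed:
        return "I cannot answer this without violating the disclosure policy."

    # Aggregate only the policy-compliant intermediate answers into the final reply.
    return llm_call(
        "Using only these facts:\n- " + "\n- ".join(allowed)
        + f"\n\nAnswer the question: {question}"
    )
```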
Problem

Research questions and friction points this paper is trying to address.

Large Vision-Language Models
Document Question Answering
Information Disclosure Policies
Multimodal Reasoning
Safety Gap
Innovation

Methods, ideas, or system contributions that make the work stand out.

Document Policy Preservation
Reasoning-Induced Safety Gap
Multimodal Reasoning
Decompose-Verify-Aggregation
Large Vision-Language Models
Haeun Jang
Chung-Ang University, Seoul, Korea
Hwan Chang
Chung-Ang University, Seoul, Korea
Hwanhee Lee
Assistant Professor, Department of Artificial Intelligence, Chung-Ang University
Natural Language Processing · Trustworthy LLM · LLM Safety