🤖 AI Summary
Multimodal large reasoning models (MLRMs) pose severe geographic privacy risks by inferring precise locations from images via hierarchical chain-of-thought reasoning; existing perception-oriented defenses fail to disrupt their multi-step conceptual inference. This paper proposes ReasonBreak, a concept-aware adversarial perturbation framework that introduces the first privacy defense paradigm explicitly designed for hierarchical reasoning chains: it applies concept-level aligned perturbations to precisely sever dependencies on critical environmental cues along the reasoning path. To support rigorous evaluation, the authors construct GeoPrivacy-6K, a hierarchical geographic privacy benchmark comprising 6,341 ultra-high-resolution (≥2K) images with hierarchical concept annotations. Extensive evaluation across seven state-of-the-art MLRMs demonstrates that ReasonBreak improves tract-level protection by 14.4 percentage points (from 19.4% to 33.8%) and nearly doubles block-level protection (from 16.8% to 33.5%).
📝 Abstract
Multi-modal large reasoning models (MLRMs) pose significant privacy risks by inferring precise geographic locations from personal images through hierarchical chain-of-thought reasoning. Existing privacy protection techniques, primarily designed for perception-based models, prove ineffective against MLRMs' sophisticated multi-step reasoning processes that analyze environmental cues. We introduce **ReasonBreak**, a novel adversarial framework specifically designed to disrupt hierarchical reasoning in MLRMs through concept-aware perturbations. Our approach is founded on the key insight that effective disruption of geographic reasoning requires perturbations aligned with conceptual hierarchies rather than uniform noise. ReasonBreak strategically targets critical conceptual dependencies within reasoning chains, generating perturbations that invalidate specific inference steps and cascade through subsequent reasoning stages. To facilitate this approach, we contribute **GeoPrivacy-6K**, a comprehensive dataset comprising 6,341 ultra-high-resolution images (≥2K) with hierarchical concept annotations. Extensive evaluation across seven state-of-the-art MLRMs (including GPT-o3, GPT-5, Gemini 2.5 Pro) demonstrates ReasonBreak's superior effectiveness, achieving a 14.4-percentage-point improvement in tract-level protection (33.8% vs 19.4%) and nearly doubling block-level protection (33.5% vs 16.8%). This work establishes a new paradigm for privacy protection against reasoning-based threats.
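To make the core idea concrete, here is a minimal toy sketch (not the paper's actual method) of a concept-masked adversarial perturbation: a bounded L∞ perturbation is applied only to pixels flagged as relevant to a geographic concept, pushing down a stand-in linear "geolocation score". The function name, the linear scorer, and the binary `concept_mask` are all illustrative assumptions; the real framework operates on reasoning chains of full MLRMs.

```python
import numpy as np

def concept_masked_perturbation(image, weights, concept_mask,
                                eps=0.03, steps=10, alpha=0.01):
    """Toy illustration: iteratively perturb only concept-relevant pixels
    to lower a linear geolocation score w . x (a stand-in for an MLRM).

    image        : flat array of pixel values in [0, 1]
    weights      : gradient of the toy linear score w.r.t. the input
    concept_mask : 1.0 where a pixel belongs to a targeted concept, else 0.0
    eps          : L_inf perturbation budget
    """
    delta = np.zeros_like(image)
    for _ in range(steps):
        grad = weights  # for a linear score, the input gradient is constant
        # signed descent step, restricted to concept-relevant pixels
        delta -= alpha * np.sign(grad) * concept_mask
        delta = np.clip(delta, -eps, eps)  # project back into the budget
    return np.clip(image + delta, 0.0, 1.0)
```

The masking is what makes the perturbation "concept-aligned" in spirit: untargeted pixels are left untouched, so the image's overall appearance is preserved while the cue the toy scorer depends on is suppressed.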