PEEL the Layers and Find Yourself: Revisiting Inference-time Data Leakage for Residual Neural Networks

📅 2025-04-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper reveals a critical privacy vulnerability in residual neural networks (ResNets): user input data can be reconstructed, via inversion, from intermediate outputs at inference time. Existing inversion methods fail to exploit the intrinsic properties of residual connections, resulting in low-fidelity reconstructions. To address this, we propose PEEL, a novel inversion framework that models residual block outputs as noisy observations of the input and formulates a layer-wise constrained optimization problem. PEEL jointly leverages backward feature inversion and entropy modeling of skip-connection information to enable block-level reversible feature recovery. Theoretical analysis identifies residual connections as the primary leakage channel. Empirical evaluation on facial image datasets demonstrates that PEEL reduces mean squared error (MSE) by an order of magnitude over state-of-the-art methods, validating both its effectiveness and the structural sensitivity of ResNets to inversion attacks.

📝 Abstract
This paper explores inference-time data leakage risks of deep neural networks (NNs), where a curious and honest model service provider is interested in retrieving users' private data inputs solely based on the model inference results. Particularly, we revisit residual NNs due to their popularity in computer vision and our hypothesis that residual blocks are a primary cause of data leakage owing to the use of skip connections. By formulating inference-time data leakage as a constrained optimization problem, we propose a novel backward feature inversion method, PEEL, which can effectively recover block-wise input features from the intermediate output of residual NNs. The surprising results in high-quality input data recovery can be explained by the intuition that the output from these residual blocks can be considered as a noisy version of the input and thus the output retains sufficient information for input recovery. We demonstrate the effectiveness of our layer-by-layer feature inversion method on facial image datasets and pre-trained classifiers. Our results show that PEEL outperforms the state-of-the-art recovery methods by an order of magnitude when evaluated by mean squared error (MSE). The code is available at https://github.com/Huzaifa-Arif/PEEL.
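The core intuition above — a residual block's output y = x + F(x) is a noisy copy of its input, so x can be recovered by least-squares inversion — can be sketched in a few lines. This is a toy illustration under assumed shapes and weight scales, not the authors' implementation; F here is a small two-layer ReLU MLP standing in for a real residual branch.

```python
import numpy as np

# Toy single-block inversion: recover x from y = x + F(x) by gradient
# descent on the reconstruction loss ||x_hat + F(x_hat) - y||^2.
# All dimensions, weight scales, and step sizes are illustrative choices.

rng = np.random.default_rng(0)
d, h = 8, 16
W1 = 0.1 * rng.standard_normal((h, d))   # residual-branch weights (assumed)
W2 = 0.1 * rng.standard_normal((d, h))

def block(x):
    """Residual block: identity skip plus a small MLP perturbation."""
    return x + W2 @ np.maximum(W1 @ x, 0.0)

x_true = rng.standard_normal(d)          # the "private" user input
y = block(x_true)                        # observed intermediate output

# Because the skip connection keeps the map close to the identity,
# plain gradient descent on the least-squares objective converges quickly.
x_hat = np.zeros(d)
for _ in range(500):
    pre = W1 @ x_hat
    r = x_hat + W2 @ np.maximum(pre, 0.0) - y    # residual of the fit
    mask = (pre > 0).astype(float)               # ReLU derivative
    grad = r + W1.T @ (mask * (W2.T @ r))        # Jacobian-transpose times r
    x_hat -= 0.5 * grad

mse = np.mean((x_hat - x_true) ** 2)
print(f"reconstruction MSE: {mse:.2e}")
```

The skip connection is what makes this easy: with small branch weights the block's Jacobian is a mild perturbation of the identity, so the inversion problem is well-conditioned — exactly the leakage channel the paper identifies.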
Problem

Research questions and friction points this paper is trying to address.

Investigates data leakage risks in residual neural networks
Proposes PEEL method to recover input features
Demonstrates superior performance in input data recovery
Innovation

Methods, ideas, or system contributions that make the work stand out.

Backward feature inversion for data recovery
Layer-by-layer feature inversion method
Optimization-based approach for leakage risks
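The layer-by-layer idea in the bullets above can be sketched as repeating a single-block inversion from the network's last residual block back to the first, feeding each recovered block input into the next inversion. Again a hedged toy sketch with assumed shapes and weights, not the paper's code:

```python
import numpy as np

# Layer-by-layer "peeling" (illustrative): given the output of a stack of
# residual blocks, recover each block's input in reverse order by solving
# a small least-squares inversion per block.

rng = np.random.default_rng(1)
d, h, L = 8, 16, 3                       # illustrative sizes
weights = [(0.1 * rng.standard_normal((h, d)),
            0.1 * rng.standard_normal((d, h))) for _ in range(L)]

def block(x, W1, W2):
    """One residual block: identity skip plus a two-layer ReLU branch."""
    return x + W2 @ np.maximum(W1 @ x, 0.0)

def invert_block(y, W1, W2, steps=500, lr=0.5):
    """Recover x from y = block(x) via gradient descent on the fit error."""
    x_hat = y.copy()                     # the skip path makes y a good init
    for _ in range(steps):
        pre = W1 @ x_hat
        r = x_hat + W2 @ np.maximum(pre, 0.0) - y
        mask = (pre > 0).astype(float)
        x_hat -= lr * (r + W1.T @ (mask * (W2.T @ r)))
    return x_hat

x_true = rng.standard_normal(d)
z = x_true
for W1, W2 in weights:                   # forward pass through all blocks
    z = block(z, W1, W2)

recovered = z
for W1, W2 in reversed(weights):         # peel one block at a time
    recovered = invert_block(recovered, W1, W2)

print(f"input MSE after peeling {L} blocks: "
      f"{np.mean((recovered - x_true) ** 2):.2e}")
```

Each per-block inversion is near-exact, so errors barely accumulate across blocks; this block-at-a-time decomposition is what distinguishes the layer-by-layer approach from inverting the whole network in one optimization.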