Golden Partition Zone: Rethinking Neural Network Partitioning Under Inversion Threats in Collaborative Inference

📅 2025-06-18
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
In collaborative inference, edge-side features are vulnerable to model inversion attacks (MIAs), which can leak the original inputs. Method: the paper proposes an optimal neural network partitioning strategy that exploits representation-transition properties. It first establishes a quantitative relationship among conditional-entropy abruptness, intra-class variance $R_c^2$, and feature dimensionality, defining an MIA-resilient "golden partition zone." Theoretically, partitioning at the decision layer increases reconstruction error by more than 4×. The method integrates conditional-entropy modeling, transition-point detection, regularized label smoothing, and an enhanced MIA evaluation framework. Results: evaluated on three mainstream vision models, partitioning at the transition or decision layer significantly improves MIA robustness; combining it with label smoothing further reduces $R_c^2$, achieving a favorable trade-off between privacy preservation and model generalization.
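The transition-point detection described above could be sketched as follows. The paper's exact criterion is not given in this summary, so `intra_class_variance` (a proxy for $R_c^2$), `pick_partition_layer`, and the `drop_ratio` threshold are illustrative assumptions: the idea is simply to split the network at the first layer where intra-class variance collapses sharply.

```python
import numpy as np

def intra_class_variance(features, labels):
    # Mean squared distance of each feature to its class centroid,
    # used here as a simple proxy for the paper's R_c^2.
    total, n = 0.0, len(labels)
    for c in np.unique(labels):
        fc = features[labels == c]
        total += ((fc - fc.mean(axis=0)) ** 2).sum()
    return total / n

def pick_partition_layer(layer_features, labels, drop_ratio=0.5):
    # Return the index of the first layer whose R_c^2 proxy falls below
    # drop_ratio times the previous layer's value -- a crude stand-in
    # for the paper's transition-point detection (hypothetical heuristic).
    r = [intra_class_variance(f, labels) for f in layer_features]
    for i in range(1, len(r)):
        if r[i] < drop_ratio * r[i - 1]:
            return i
    return len(r) - 1  # fall back to the deepest (decision) layer
```

Under this heuristic, the "golden partition zone" would be the layers at or beyond the detected index, where compressed class-level features reveal little about individual inputs.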

πŸ“ Abstract
In collaborative inference, intermediate features transmitted from edge devices can be exploited by adversaries to reconstruct original inputs via model inversion attacks (MIA). While existing defenses focus on shallow-layer protection, they often incur significant utility loss. A key open question is how to partition the edge-cloud model to maximize resistance to MIA while minimizing accuracy degradation. We first show that increasing model depth alone does not guarantee resistance. Through theoretical analysis, we demonstrate that representational transitions in neural networks cause sharp changes in conditional entropy $H(x \mid z)$, with intra-class variance (denoted $R_c^2$) and feature dimensionality as critical factors. Experiments on three representative deep vision models demonstrate that splitting at the representational-transition or decision-level layers increases mean squared error by more than four times compared to shallow splits, indicating significantly stronger resistance to MIA. Positive label smoothing further enhances robustness by compressing $R_c^2$ and improving generalization. Finally, we validate the resilience of decision-level features under enhanced inversion models and observe that the type of auxiliary data influences both transition boundaries and reconstruction behavior.
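The robustness metric referenced in the abstract, mean squared error between original inputs and the attacker's reconstructions, is straightforward to compute; a minimal sketch (the function name and array layout are assumptions, not the paper's code):

```python
import numpy as np

def reconstruction_mse(originals, reconstructions):
    # Per-image mean squared error between original inputs and MIA
    # reconstructions; higher values indicate stronger MIA resistance.
    diff = (originals - reconstructions) ** 2
    return diff.reshape(len(originals), -1).mean(axis=1)
```

A split whose reconstructions score more than four times the MSE of a shallow split's reconstructions would, by the abstract's criterion, count as significantly more MIA-resistant.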
Problem

Research questions and friction points this paper is trying to address.

How to partition edge-cloud models to resist inversion attacks
Impact of representational transitions on model inversion resistance
Enhancing robustness with label smoothing and auxiliary data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Partition model at representational-transition layers
Use positive label smoothing for robustness
Analyze conditional entropy for MIA resistance
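The label-smoothing contribution above could be sketched with the standard formulation, where the true class keeps $1 - \varepsilon$ of the probability mass and the remainder is spread over the other classes. The paper's "positive" or regularized variant may differ in detail; this is the common baseline, with hypothetical parameter names.

```python
import numpy as np

def smooth_labels(labels, num_classes, eps=0.1):
    # Standard label smoothing: the true class gets 1 - eps and the
    # remaining eps is split evenly among the other classes. Soft targets
    # compress intra-class variance (R_c^2) of the learned features.
    y = np.full((len(labels), num_classes), eps / (num_classes - 1))
    y[np.arange(len(labels)), labels] = 1.0 - eps
    return y
```

Training against these soft targets, rather than one-hot labels, is what the summary credits with further reducing $R_c^2$ while preserving generalization.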