AI Summary
In collaborative inference, edge-side features are vulnerable to model inversion attacks (MIAs), leading to leakage of original inputs.
Method: This paper proposes an optimal neural network partitioning strategy leveraging representation transition properties. It first establishes a quantitative relationship among conditional entropy abruptness, intra-class variance $R_c^2$, and feature dimensionality, defining an MIA-resilient "golden partition zone." Theoretically, partitioning at the decision layer increases reconstruction error by over 4×. The method integrates conditional entropy modeling, transition-point detection, regularized label smoothing, and an enhanced MIA evaluation framework.
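The transition-point idea can be illustrated with a minimal sketch. The paper's exact definitions of $R_c^2$ and the detection rule are not given here, so this assumes $R_c^2$ is the mean squared distance of features to their class centroid, and flags the layer with the sharpest drop in $R_c^2$ as the candidate transition point; the function names are illustrative.

```python
# Hedged sketch: estimate per-layer intra-class variance R_c^2 and flag the
# layer with the sharpest drop as a candidate "transition point".
# Assumption: R_c^2 = mean squared distance of features to their class centroid;
# the paper's actual formulation may differ.
import numpy as np

def intra_class_variance(features, labels):
    """Mean squared distance of each feature vector to its class centroid."""
    features = np.asarray(features, dtype=float)
    labels = np.asarray(labels)
    total, count = 0.0, 0
    for c in np.unique(labels):
        fc = features[labels == c]
        centroid = fc.mean(axis=0)
        total += ((fc - centroid) ** 2).sum()
        count += fc.shape[0]
    return total / count

def transition_layer(per_layer_features, labels):
    """Index of the layer with the largest drop in R_c^2 versus the previous layer."""
    r = [intra_class_variance(f, labels) for f in per_layer_features]
    drops = [r[i - 1] - r[i] for i in range(1, len(r))]
    return 1 + int(np.argmax(drops)), r
```

Partitioning at (or after) the detected layer then corresponds to the "golden partition zone" intuition: features past the transition retain little input-specific variance for an inversion model to exploit.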
Results: Evaluated on three mainstream vision models, partitioning at the transition or decision layer significantly improves MIA robustness; combining it with label smoothing further reduces $R_c^2$, achieving a favorable trade-off between privacy preservation and model generalization.
Abstract
In collaborative inference, intermediate features transmitted from edge devices can be exploited by adversaries to reconstruct original inputs via model inversion attacks (MIA). While existing defenses focus on shallow-layer protection, they often incur significant utility loss. A key open question is how to partition the edge-cloud model to maximize resistance to MIA while minimizing accuracy degradation. We first show that increasing model depth alone does not guarantee resistance. Through theoretical analysis, we demonstrate that representational transitions in neural networks cause sharp changes in conditional entropy $H(x \mid z)$, with intra-class variance (denoted $R_c^2$) and feature dimensionality as critical factors. Experiments on three representative deep vision models demonstrate that splitting at the representational-transition or decision-level layers increases mean squared error by more than four times compared to shallow splits, indicating significantly stronger resistance to MIA. Positive label smoothing further enhances robustness by compressing $R_c^2$ and improving generalization. Finally, we validate the resilience of decision-level features under enhanced inversion models and observe that the type of auxiliary data influences both transition boundaries and reconstruction behavior.
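For readers unfamiliar with the smoothing step: positive label smoothing replaces hard one-hot targets with softened distributions, which pulls same-class logits toward a shared target and thereby tends to compress $R_c^2$. A minimal sketch of standard label smoothing follows; the smoothing factor `eps` and function name are illustrative, and the paper's "regularized" variant may differ in detail.

```python
# Hedged sketch of label smoothing: each one-hot target gives up eps of its
# probability mass, spread uniformly over all classes. Standard formulation;
# the paper's regularized variant may add further terms.
import numpy as np

def smooth_labels(labels, num_classes, eps=0.1):
    """Soft targets: (1 - eps) on the true class plus eps / num_classes everywhere."""
    onehot = np.eye(num_classes)[np.asarray(labels)]
    return onehot * (1.0 - eps) + eps / num_classes
```

Training the edge model against these soft targets is what the abstract refers to as trading a small amount of target sharpness for lower intra-class variance and better generalization.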