🤖 AI Summary
To protect graph-structural privacy in cyber-physical system (CPS) simulations against system identification (SI) attacks, this paper proposes a differential privacy mechanism for graph filter outputs. Gaussian noise is injected into the nodal excitations, so that identifying the graph shift operator (GSO) becomes a noisy inverse problem. We derive an explicit theoretical relationship between the $(\varepsilon,\delta)$-differential privacy guarantee and both the spectral response of the graph filter and the condition number of the noise covariance matrix, and we prove that smoother filters and lower-condition-number covariance matrices significantly strengthen privacy protection. Experiments demonstrate that the method effectively prevents GSO inference while preserving the utility of the filtered outputs for downstream tasks, yielding a quantifiable privacy–utility trade-off.
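The setup described above can be illustrated with a minimal sketch: a polynomial graph filter applied to nodal excitations that have been privatized with Gaussian noise before filtering. All names, filter taps, and the graph itself are illustrative assumptions, not the paper's implementation; the paper's privacy bound is only alluded to via the covariance condition number.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical small graph: a symmetric adjacency matrix serves as the
# graph shift operator (GSO). All values here are illustrative.
n = 5
A = rng.random((n, n))
S = np.triu(A, 1)
S = S + S.T  # symmetric GSO, zero diagonal

# Polynomial graph filter H(S) = sum_k h_k S^k. Per the summary, smoother
# filters (fast-decaying taps) are claimed to yield stronger privacy.
h = np.array([1.0, 0.5, 0.25])  # assumed filter taps
H = sum(c * np.linalg.matrix_power(S, k) for k, c in enumerate(h))

# Gaussian-mechanism-style noise on the nodal excitations (the inputs),
# with covariance Sigma; its condition number enters the privacy bound.
x = rng.standard_normal(n)           # true excitation
Sigma = 1.0 * np.eye(n)              # identity covariance => condition number 1
noise = rng.multivariate_normal(np.zeros(n), Sigma)
x_priv = x + noise

# The analyst only observes the filtered output of the privatized input,
# and must solve a noisy inverse problem to recover the GSO from y.
y = H @ x_priv
print(y.shape, np.linalg.cond(Sigma))
```

An isotropic covariance as above has condition number 1, the most favorable case suggested by the summary; an ill-conditioned `Sigma` would, by the paper's claim, weaken the privacy guarantee for the same noise budget.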
📝 Abstract
This paper addresses the problem of protecting the privacy of network information against system identification (SI) attacks when sharing cyber-physical system simulations. We model the analyst's observations of networked states as time-series outputs of a graph filter driven by differentially private (DP) nodal excitations, where the analyst aims to infer the underlying graph shift operator (GSO). Unlike traditional SI, which estimates system parameters, we study the inverse question: which assumptions prevent an adversary from identifying the GSO while preserving utility for legitimate analysis. We show that applying DP mechanisms to the inputs provides formal privacy guarantees for the GSO, linking the $(\varepsilon,\delta)$-DP bound to the spectral properties of the graph filter and the noise covariance. More precisely, for DP Gaussian signals, the spectral characteristics of both the filter and the noise covariance determine the privacy bound, with smooth filters and low-condition-number covariance yielding stronger privacy.