🤖 AI Summary
To jointly manage compression efficiency, reconstruction fidelity, and the distinguishability of anomaly signals in transmitted surveillance data, this paper develops a rate-distortion-discriminability (R-D-Disc) trade-off framework. Unlike conventional rate-distortion (R-D) optimization, the approach builds a unified performance model grounded in information theory and Gaussian assumptions, and uses Pareto-frontier analysis to characterize the intrinsic trade-offs among the three objectives, guiding the design of discriminability-aware compression strategies. The analysis shows that, at a given bitrate, such strategies preserve anomaly distinguishability better than R-D-only compression, mitigating the discriminability loss that compression induces. This work provides both theoretical foundations and practical guidance for cloud-based anomaly detection under resource constraints.
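To fix notation for the discussion, the following is a sketch of how the three axes might be formalized under the Gaussian assumption; the rate-distortion expression is the standard result for a memoryless Gaussian source, while the KL-divergence discriminability proxy is our illustrative assumption, not necessarily the paper's definition:

```latex
% Classical rate-distortion function (standard result) for a
% memoryless source X ~ N(0, \sigma^2) under squared-error distortion:
\[
  R(D) \;=\; \tfrac{1}{2}\log_2\!\frac{\sigma^2}{D},
  \qquad 0 < D \le \sigma^2 .
\]
% Illustrative discriminability proxy (our assumption): the KL
% divergence between the distributions of the compressed anomalous
% and compressed normal signals, p_a and p_n:
\[
  \mathrm{Disc}(D) \;=\; D_{\mathrm{KL}}\!\bigl(p_a \,\|\, p_n\bigr).
\]
```

Sweeping the distortion budget $D$ then traces out how rate and discriminability co-vary, which is the surface the Pareto analysis explores.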
📝 Abstract
Extensive monitoring systems generate data that is usually compressed for network transmission. The compressed data may then be processed in the cloud for tasks such as anomaly detection. However, the information loss introduced by compression can impair the detector's ability to distinguish regular from irregular patterns. Here we extend the information-theoretic framework introduced in [1] to simultaneously address the trade-off among the three quantities on which the system's effectiveness depends: the efficiency of compression, the distortion it introduces, and the distinguishability between compressed normal signals and compressed anomalous signals. Leveraging a Gaussian assumption, we draw curves showing how moving along a Pareto surface manages this trade-off better than simply applying optimal rate-distortion compression and hoping that the compressed signals remain distinguishable.
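To make the Pareto-surface idea concrete, here is a minimal Python sketch (our illustration, not the paper's code) of one slice of the trade-off. The assumptions are ours: the normal signal is N(0, SIGMA2), the anomaly is a mean shift MU of the same variance, and compression at mean-squared distortion D is modeled by the optimal Gaussian test channel, so both formulas below are standard Gaussian results applied to this toy setup.

```python
import math

SIGMA2 = 1.0   # variance of the normal signal (assumed for illustration)
MU = 2.0       # anomalous mean shift (assumed for illustration)

def rate(D: float) -> float:
    """Classical Gaussian rate-distortion function, in bits per sample."""
    return 0.5 * math.log2(SIGMA2 / D)

def discriminability(D: float) -> float:
    """KL divergence between compressed anomalous and compressed normal signals.

    Under the optimal test channel x_hat = a*x + z with a = 1 - D/SIGMA2,
    both reconstructions have variance SIGMA2 - D and means a*MU vs. 0, so
    D_KL = (a*MU)^2 / (2*(SIGMA2 - D)) = a * MU**2 / (2*SIGMA2).
    """
    a = 1.0 - D / SIGMA2
    return a * MU**2 / (2.0 * SIGMA2)

# Sweep the distortion budget to trace one slice of the R-D-Disc surface.
for D in (0.1, 0.25, 0.5, 0.75, 0.9):
    print(f"D={D:4.2f}  R={rate(D):5.2f} bits  Disc={discriminability(D):5.3f} nats")
```

The sweep makes the tension visible in numbers: spending fewer bits (larger D) steadily drains discriminability, which is precisely the axis that R-D-only optimization ignores and the Pareto analysis exposes.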