🤖 AI Summary
Sensor systems for sequential estimation/regression are vulnerable to data attacks yet lack effective, model-agnostic defenses.
Method: This paper proposes the first fully data-driven, prior-free AI-based defense framework, featuring an adaptive sensor data purification mechanism designed to withstand strong (including white-box) attacks. It employs a two-tier architecture: a baseline scheme for attackers unaware of the defense logic, and an enhanced variant for attackers who know the defense logic, with all learning performed on clean (unattacked) training data only. The framework integrates worst-case attack modeling, robust sequential estimation, anomaly detection, and adaptive filtering.
Results: Theoretical analysis and experiments demonstrate that the baseline scheme achieves near-optimal performance, matching an ideal oracle that knows which sensors are attacked (error gap ≤ 0.01). The enhanced variant significantly mitigates performance degradation under white-box attacks, approaching the oracle's performance at the cost of only a slight degradation when no attack is present.
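To make the purification idea concrete, here is a minimal illustrative sketch of the general technique the summary describes: residual statistics are learned from clean (unattacked) training data only, each incoming reading is screened against those statistics, and flagged sensors are excluded before fusion. This is not the paper's actual algorithm; the function names, the cross-sensor-median residual, and the 3-sigma threshold are all assumptions chosen for illustration.

```python
import numpy as np

def fit_clean_stats(clean_data):
    """Learn per-sensor residual statistics from unattacked training data only.

    clean_data: array of shape (n_samples, n_sensors), all sensors observing
    the same underlying signal. Residuals are taken against the cross-sensor
    median at each time step (an illustrative choice, not the paper's method).
    """
    residuals = clean_data - np.median(clean_data, axis=1, keepdims=True)
    return residuals.mean(axis=0), residuals.std(axis=0)

def purify(readings, mean, std, k=3.0):
    """Flag sensors whose residual deviates more than k standard deviations
    from its clean-data statistics, then fuse only the trusted sensors.

    Returns (fused_estimate, suspected_attack_mask).
    """
    residuals = readings - np.median(readings)
    mask = np.abs(residuals - mean) > k * std  # suspected attacked sensors
    trusted = readings[~mask]
    # fall back to the median of all sensors if everything is flagged
    estimate = trusted.mean() if trusted.size else np.median(readings)
    return estimate, mask

# Toy example: 8 sensors observing the same scalar signal near 5.0.
rng = np.random.default_rng(0)
clean = 5.0 + 0.1 * rng.standard_normal((500, 8))
mu, sigma = fit_clean_stats(clean)

attacked = 5.0 + 0.1 * rng.standard_normal(8)
attacked[2] += 4.0  # injected sensor-data attack on sensor 2
est, mask = purify(attacked, mu, sigma)
```

Because only clean data is used to fit the detector, this sketch mirrors the summary's constraint that robust learning uses no attacked training samples; the attacked sensor is identified at inference time and excluded from the fused estimate.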
📝 Abstract
Sensor systems are ubiquitous today and vulnerable to sensor data attacks. Because the consequences can be devastating, counteracting sensor data attacks is an extremely important topic that has not received sufficient study. This paper develops the first methods that accurately identify and eliminate only the problematic attacked sensor data presented to a sequence estimation/regression algorithm, under a powerful attack model constructed from known and observed attacks. The approach does not assume a known form for the statistical model of the sensor data, so data-driven and machine-learning sequence estimation/regression algorithms can be protected. A simple protection approach is first developed for attackers without knowledge of the details of our protection approach, followed by additional processing for attacks based on knowledge of the protection system. In the cases tested for which it was designed, experimental results show that the simple approach achieves performance indistinguishable, to two decimal places, from that of an approach which knows which sensors are attacked. When the attacker has knowledge of the protection approach, experimental results indicate the additional processing can be configured so that its worst-case degradation with a large number of attacked sensors is significantly smaller than that of the simple approach, and close to that of an approach which knows which sensors are attacked, at the cost of only a slight degradation under no attacks. Mathematical descriptions of the worst-case attacks demonstrate that the additional processing provides similar advantages in cases for which we do not have numerical results. All the data-driven processing used in our approaches employs only unattacked training data.
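The abstract's white-box setting, where the attacker knows the protection scheme, can be illustrated with a generic robust-fusion example (not the paper's actual defense). Here the defense is a standard trimmed mean, and the attacker, given full knowledge of it, searches for the injected offset that maximizes the induced estimation error. The function names, grid search, and trim level are all assumptions for illustration; the point is that a suitably configured defense bounds even the worst-case white-box degradation.

```python
import numpy as np

def trimmed_mean(readings, trim=2):
    """Defense: drop the `trim` largest and smallest readings, average the rest."""
    s = np.sort(readings)
    return s[trim:-trim].mean()

def white_box_attack(readings, attacked_idx, defense, bound=10.0, steps=201):
    """An attacker who knows `defense` searches (by grid search) for the
    offset on the attacked sensors that maximizes the induced error."""
    base = defense(readings)
    best_offset, worst_err = 0.0, 0.0
    for offset in np.linspace(-bound, bound, steps):
        x = readings.copy()
        x[attacked_idx] += offset  # corrupt the attacked sensors
        err = abs(defense(x) - base)
        if err > worst_err:
            worst_err, best_offset = err, offset
    return best_offset, worst_err

# Toy example: 8 sensors near 5.0, two of them attacked.
rng = np.random.default_rng(1)
readings = 5.0 + 0.1 * rng.standard_normal(8)
# With trim=2 and only 2 attacked sensors, large offsets are simply trimmed
# away, so the worst-case induced error stays within the clean-sensor spread.
_, worst = white_box_attack(readings, [0, 1], trimmed_mean)
```

This mirrors the abstract's qualitative claim: against a defense whose configuration anticipates the attack, even the worst offset a knowledgeable attacker can choose yields a bounded degradation, rather than an arbitrarily large estimation error.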