Kill it with FIRE: On Leveraging Latent Space Directions for Runtime Backdoor Mitigation in Deep Neural Networks

📅 2026-02-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing defense methods struggle to efficiently mitigate backdoor attacks against deployed deep neural networks during inference. To address this challenge, this work proposes FIRE, a novel runtime defense that, for the first time, analyzes the model’s latent space to identify and counteract the feature directions introduced by backdoors. By reversing the influence of these malicious directions, FIRE neutralizes trigger effects and restores correct predictions on poisoned inputs—without modifying the model architecture or input data. Relying solely on internal representation analysis and on-the-fly feature correction, FIRE achieves substantial improvements over current runtime backdoor mitigation techniques across diverse image benchmarks, attack variants, and network architectures, all while incurring minimal computational overhead.

📝 Abstract
Machine learning models are increasingly present in our everyday lives; as a result, they become targets of adversarial attackers seeking to manipulate the systems we interact with. A well-known vulnerability is a backdoor introduced into a neural network by poisoned training data or a malicious training process. Backdoors can be used to induce unwanted behavior by including a certain trigger in the input. Existing mitigations filter training data, modify the model, or perform expensive input modifications on samples. If a vulnerable model has already been deployed, however, those strategies are either ineffective or inefficient. To address this gap, we propose our inference-time backdoor mitigation approach called FIRE (Feature-space Inference-time REpair). We hypothesize that a trigger induces structured and repeatable changes in the model's internal representation. We view the trigger as directions in the latent spaces between layers that can be applied in reverse to correct the inference mechanism. Therefore, we turn the backdoored model against itself by manipulating its latent representations and moving a poisoned sample's features along the backdoor directions to neutralize the trigger. Our evaluation shows that FIRE has low computational overhead and outperforms current runtime mitigations on image benchmarks across various attacks, datasets, and network architectures.
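The abstract's central idea, treating the trigger as a direction in latent space and moving a poisoned sample's features back along that direction, can be illustrated with a minimal sketch. This is not the paper's implementation: the function name, the single-unit-direction model of the backdoor, and the projection-based correction rule are all simplifying assumptions for illustration.

```python
import numpy as np

def correct_features(h: np.ndarray, d: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    """Illustrative repair step (assumed form, not FIRE's actual algorithm).

    h     : latent representation of one input at some layer
    d     : estimated direction associated with the backdoor trigger
    alpha : correction strength (1.0 removes the full component along d)
    """
    d = d / np.linalg.norm(d)      # normalize the backdoor direction
    coeff = float(h @ d)           # component of h along that direction
    return h - alpha * coeff * d   # move the features back along -d

# Toy usage: a trigger that shifts latent features along d.
d = np.array([1.0, 0.0, 0.0])
h_clean = np.array([0.0, 1.0, -0.5])
h_poisoned = h_clean + 3.0 * d     # simulated effect of the trigger
h_repaired = correct_features(h_poisoned, d)
```

In this toy case the repaired vector recovers the clean features exactly, because the simulated trigger acts purely along `d`; real latent-space effects are noisier, which is why a runtime defense must estimate the directions from the model's internal representations.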
Problem

Research questions and friction points this paper is trying to address.

backdoor
deep neural networks
runtime mitigation
latent space
inference-time
Innovation

Methods, ideas, or system contributions that make the work stand out.

backdoor mitigation
latent space manipulation
inference-time defense
feature-space repair
neural network security