DeBackdoor: A Deductive Framework for Detecting Backdoor Attacks on Deep Models with Limited Data

๐Ÿ“… 2025-03-27
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
Detecting backdoors in third-party deep models deployed in safety-critical systems remains challenging under strict black-box constraints, i.e., only forward inference access is available, with no training data, model gradients, or fine-tuning capability. Method: This paper proposes a deduction-based trigger inversion method that requires neither fine-tuning, nor training samples, nor gradient information. Its core innovation is a trigger search framework built on a smoothed estimate of the attack success rate, combining forward-pass analysis with template-based attack modeling to efficiently explore the trigger space under severely restricted access. Results: Extensive experiments across diverse attack types, model architectures, and datasets demonstrate near-perfect detection accuracy (~100%), substantially outperforming existing state-of-the-art methods. To the authors' knowledge, this is the first approach achieving highly robust backdoor detection in a zero-data, purely forward black-box setting.

๐Ÿ“ Abstract
Backdoor attacks are among the most effective, practical, and stealthy attacks in deep learning. In this paper, we consider a practical scenario where a developer obtains a deep model from a third party and uses it as part of a safety-critical system. The developer wants to inspect the model for potential backdoors prior to system deployment. We find that most existing detection techniques make assumptions that are not applicable to this scenario. In this paper, we present a novel framework for detecting backdoors under realistic restrictions. We generate candidate triggers by deductively searching over the space of possible triggers. We construct and optimize a smoothed version of Attack Success Rate as our search objective. Starting from a broad class of template attacks and just using the forward pass of a deep model, we reverse engineer the backdoor attack. We conduct extensive evaluation on a wide range of attacks, models, and datasets, with our technique performing almost perfectly across these settings.
Problem

Research questions and friction points this paper is trying to address.

Detects backdoor attacks in deep models with limited data
Identifies potential backdoors in third-party models pre-deployment
Uses deductive trigger search to reverse engineer attacks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Deductive search for candidate triggers
Optimize smoothed Attack Success Rate
Reverse engineer backdoor via forward pass
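The idea behind these bullets can be sketched as a gradient-free loop: stamp clean inputs with a candidate trigger from a patch template, score it by the mean target-class probability returned by the model's forward pass (a smoothed stand-in for the hard attack success rate), and keep the best candidate. This is a minimal illustration only: the function names (`smoothed_asr`, `random_search`), the square-patch template, and the random sampling are assumptions for the sketch, not the paper's actual deductive search procedure.

```python
import numpy as np

def smoothed_asr(forward, images, patch, mask, target):
    """Smoothed Attack Success Rate: mean target-class probability over
    clean inputs stamped with the candidate trigger. Uses only the model's
    forward pass -- no gradients, training data, or fine-tuning.
    (Illustrative stand-in for the paper's smoothed ASR objective.)"""
    stamped = images * (1.0 - mask) + patch * mask   # paste trigger onto inputs
    probs = forward(stamped)                         # (N, num_classes) probabilities
    return float(probs[:, target].mean())            # in [0, 1]; near 1.0 = likely backdoor

def random_search(forward, images, target, shape, steps=200, seed=0):
    """Hypothetical gradient-free search over small square-patch triggers,
    scored purely by forward passes. The paper's deductive search over
    template attacks is more structured than this random baseline."""
    rng = np.random.default_rng(seed)
    best_patch, best_mask, best_score = None, None, -1.0
    h = w = 4                                        # assumed patch template size
    for _ in range(steps):
        patch = rng.random(shape)                    # candidate trigger pattern
        mask = np.zeros(shape)
        y = rng.integers(0, shape[0] - h + 1)        # candidate trigger location
        x = rng.integers(0, shape[1] - w + 1)
        mask[y:y + h, x:x + w] = 1.0
        score = smoothed_asr(forward, images, patch, mask, target)
        if score > best_score:
            best_patch, best_mask, best_score = patch, mask, score
    return best_patch, best_mask, best_score
```

A model would be flagged as backdoored if some target class admits a small trigger whose smoothed ASR is near 1.0; using probabilities rather than hard predictions keeps the objective informative even for weak candidate triggers.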
Dorde Popovic
Qatar Computing Research Institute, HBKU
Neural Trojan Backdoors, Adversarial Learning, Robust Machine Learning, Federated Learning, Ethics of Artificial Intelligence
Amin Sadeghi
Qatar Computing Research Institute, Hamad Bin Khalifa University
Ting Yu
Mohamed bin Zayed University of Artificial Intelligence
Sanjay Chawla
Qatar Computing Research Institute, Hamad Bin Khalifa University
Issa Khalil
Research Director - Qatar Computing Research Institute (QCRI)
Network Security, Security Data Analytics, Private Data Sharing