InkDrop: Invisible Backdoor Attacks Against Dataset Condensation

📅 2026-03-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes a novel invisible backdoor attack for dataset condensation that overcomes the limited stealth of existing approaches. By selecting samples near the model’s decision boundary and leveraging semantic affinity analysis together with perceptual–spatial consistency constraints, the method generates instance-dependent perturbations. Notably, it is the first to incorporate both perceptual and spatial consistency into perturbation generation, thereby significantly enhancing attack stealth and effectiveness while preserving the model’s performance on the primary task. Experimental results demonstrate that the proposed approach successfully embeds imperceptible backdoors across multiple datasets, achieving high attack success rates with negligible degradation in main-task accuracy.
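The summary's first step, selecting candidate samples near the target decision boundary that show latent affinity to the target class, can be sketched with a simple logit-margin heuristic: keep samples whose runner-up class is the target and whose top-2 margin is smallest. This is a minimal illustration of the idea, not the paper's actual selection procedure; the function name and margin criterion are assumptions.

```python
import numpy as np

def select_boundary_candidates(logits, target_class, k):
    """Hypothetical sketch: pick the k samples closest to the decision
    boundary of the target class. "Close" is approximated by the gap
    between the top-2 logits; "latent affinity" is approximated by
    requiring the runner-up prediction to be the target class."""
    order = np.argsort(logits, axis=1)          # per-sample class ranking
    top1, top2 = order[:, -1], order[:, -2]
    rows = np.arange(len(logits))
    margin = logits[rows, top1] - logits[rows, top2]
    # candidates: not yet predicted as target, but target is second choice
    mask = (top1 != target_class) & (top2 == target_class)
    idx = np.where(mask)[0]
    idx = idx[np.argsort(margin[idx])]          # smallest margin first
    return idx[:k]
```

In a real attack the logits would come from a surrogate model trained on the clean data; here any `(n_samples, n_classes)` score matrix works.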
📝 Abstract
Dataset Condensation (DC) is a data-efficient learning paradigm that synthesizes small yet informative datasets, enabling models to match the performance of full-data training. However, recent work exposes a critical vulnerability of DC to backdoor attacks, where malicious patterns (e.g., triggers) are implanted into the condensation dataset, inducing targeted misclassification on specific inputs. Existing attacks always prioritize attack effectiveness and model utility, overlooking the crucial dimension of stealthiness. To bridge this gap, we propose InkDrop, which enhances the imperceptibility of malicious manipulation without degrading attack effectiveness and model utility. InkDrop leverages the inherent uncertainty near model decision boundaries, where minor input perturbations can induce semantic shifts, to construct a stealthy and effective backdoor attack. Specifically, InkDrop first selects candidate samples near the target decision boundary that exhibit latent semantic affinity to the target class. It then learns instance-dependent perturbations constrained by perceptual and spatial consistency, embedding targeted malicious behavior into the condensed dataset. Extensive experiments across diverse datasets validate the overall effectiveness of InkDrop, demonstrating its ability to integrate adversarial intent into condensed datasets while preserving model utility and minimizing detectability. Our code is available at https://github.com/lvdongyi/InkDrop.
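The abstract's second step, learning instance-dependent perturbations constrained by perceptual and spatial consistency, can be illustrated with crude stand-ins for those constraints: an L-infinity budget as a proxy for perceptual imperceptibility, and a total-variation penalty as a proxy for spatial smoothness. The function, its parameters, and the toy gradient input are all assumptions for illustration; the paper's actual constraint formulation is not reproduced here.

```python
import numpy as np

def craft_perturbation(x, grad_target, eps=8 / 255, tv_weight=0.1,
                       steps=20, lr=0.05):
    """Hypothetical sketch: learn a per-instance perturbation delta that
    follows grad_target (a direction increasing the target-class score)
    while staying within an L-inf budget (stand-in for perceptual
    consistency) and spatially smooth via a total-variation penalty
    (stand-in for spatial consistency). x is an image in [0, 1]."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        # gradient of 0.5 * sum of squared neighbor differences (TV penalty)
        tv_grad = np.zeros_like(delta)
        dv = delta[1:, :] - delta[:-1, :]       # vertical differences
        dh = delta[:, 1:] - delta[:, :-1]       # horizontal differences
        tv_grad[1:, :] += dv
        tv_grad[:-1, :] -= dv
        tv_grad[:, 1:] += dh
        tv_grad[:, :-1] -= dh
        delta += lr * (grad_target - tv_weight * tv_grad)
        delta = np.clip(delta, -eps, eps)       # enforce invisibility budget
    x_adv = np.clip(x + delta, 0.0, 1.0)        # keep a valid image
    return x_adv, delta
```

In practice `grad_target` would be a model gradient recomputed each step; the fixed-direction toy above only shows how the two consistency proxies shape the learned perturbation.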
Problem

Research questions and friction points this paper is trying to address.

backdoor attack
dataset condensation
stealthiness
imperceptibility
data-efficient learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dataset Condensation
Backdoor Attack
Stealthiness
Decision Boundary
Perceptual Consistency