TabAttackBench: A Benchmark for Adversarial Attacks on Tabular Data

📅 2025-05-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing adversarial attack research on tabular data emphasizes attack effectiveness while largely neglecting imperceptibility, a critical constraint given tabular data's inherent heterogeneity and complex feature dependencies. Method: The authors introduce a benchmark that jointly evaluates attack effectiveness and imperceptibility, systematically comparing five adversarial attacks across eleven heterogeneous tabular datasets (both mixed and numerical-only) and four model classes. Imperceptibility is treated as a core evaluation dimension alongside effectiveness, using metrics such as attack success rate and perturbation norms. Contribution/Results: The analysis reveals significant differences in vulnerability and imperceptibility between numerical-only and mixed-type datasets, shows how effectiveness and imperceptibility interact across the evaluated models, and identifies concrete directions for designing attacks that balance the two.
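The evaluation dimensions named above (attack success rate plus L₀/L₁ perturbation norms) can be sketched in a few lines of NumPy. This is an illustrative sketch, not the benchmark's actual code: the `attack_metrics` function, the `predict` callable, and the nonzero-tolerance threshold are assumptions for the example.

```python
import numpy as np

def attack_metrics(x, x_adv, y_true, predict):
    """Illustrative effectiveness/imperceptibility metrics for a batch of
    tabular adversarial examples (hypothetical helper, not the paper's code).

    x, x_adv : (n_samples, n_features) clean and perturbed inputs
    y_true   : (n_samples,) ground-truth labels
    predict  : callable mapping inputs to predicted labels
    """
    delta = x_adv - x
    # Effectiveness: fraction of samples the attack pushes to a wrong label.
    success = predict(x_adv) != y_true
    asr = success.mean()
    # Imperceptibility: L0 counts how many features were changed per sample,
    # L1 measures the total perturbation magnitude per sample.
    l0 = (np.abs(delta) > 1e-8).sum(axis=1)
    l1 = np.abs(delta).sum(axis=1)
    return asr, l0.mean(), l1.mean()
```

Lower mean L₀/L₁ at the same success rate indicates a less perceptible attack, which is the trade-off the benchmark measures.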

📝 Abstract
Adversarial attacks pose a significant threat to machine learning models by inducing incorrect predictions through imperceptible perturbations to input data. While these attacks have been extensively studied in unstructured data like images, their application to tabular data presents new challenges. These challenges arise from the inherent heterogeneity and complex feature interdependencies in tabular data, which differ significantly from those in image data. To address these differences, it is crucial to consider imperceptibility as a key criterion specific to tabular data. Most current research focuses primarily on achieving effective adversarial attacks, often overlooking the importance of maintaining imperceptibility. To address this gap, we propose a new benchmark for adversarial attacks on tabular data that evaluates both effectiveness and imperceptibility. In this study, we assess the effectiveness and imperceptibility of five adversarial attacks across four models using eleven tabular datasets, including both mixed and numerical-only datasets. Our analysis explores how these factors interact and influence the overall performance of the attacks. We also compare the results across different dataset types to understand the broader implications of these findings. The findings from this benchmark provide valuable insights for improving the design of adversarial attack algorithms, thereby advancing the field of adversarial machine learning on tabular data.
Problem

Research questions and friction points this paper is trying to address.

Evaluating adversarial attacks on tabular data for both effectiveness and imperceptibility
Addressing challenges posed by tabular data's heterogeneity and feature interdependencies
Providing benchmark insights for improving adversarial attack algorithms
Innovation

Methods, ideas, or system contributions that make the work stand out.

New benchmark for tabular data attacks
Evaluates effectiveness and imperceptibility together
Tests five attacks on eleven datasets
Zhipeng He
School of Information Systems, Queensland University of Technology, Brisbane, Australia; Center for Data Science, Queensland University of Technology, Brisbane, Australia
Chun Ouyang
Associate Professor, PhD, Queensland University of Technology
Process Mining, Explainable AI, Predictive Analytics, AI Robustness, Machine Learning
Lijie Wen
School of Software, Tsinghua University, Beijing, China
Cong Liu
NOVA Information Management School, NOVA University of Lisbon, Lisboa, Portugal
Catarina Moreira
Associate Professor in Machine Learning, Data Science Institute, UTS
Explainable AI, Human-Centered AI, Deep Learning, Probabilistic Models, Quantum Cognition