🤖 AI Summary
This study addresses the fragmented and unsystematic state of research on adversarial attacks against machine learning models for tabular data. We conduct the first systematic literature review (SLR) specifically targeting this domain. Through cross-study meta-analysis and formal modeling of attack strategies, we propose a multidimensional taxonomy encompassing attack paradigms, realistic constraints (categorized into data-, model-, and scenario-specific limitations), and applicability boundaries. Our analysis identifies critical challenges, including the absence of standardized robustness evaluation metrics and the limited transferability of adversarial examples across models and datasets. Furthermore, we pinpoint practical deployment bottlenecks and articulate key open problems. The resulting framework establishes a unified analytical foundation for adversarial robustness research in tabular ML and provides concrete directions for future work, bridging theoretical insights with real-world applicability.
📝 Abstract
Adversarial attacks in machine learning have been extensively reviewed in areas such as computer vision and NLP, but research on tabular data remains scattered. This paper provides the first systematic literature review focused on adversarial attacks targeting tabular machine learning models. We highlight key trends, categorize attack strategies, and analyze how they address practical considerations for real-world applicability. Additionally, we outline current challenges and open research questions. By offering a clear and structured overview, this review aims to guide future efforts in understanding and addressing adversarial vulnerabilities in tabular machine learning.