Evaluating Generative Models for Tabular Data: Novel Metrics and Benchmarking

📅 2025-04-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current evaluation metrics for tabular data generation lack comprehensiveness and reliability, failing to capture generation defects that arise from structural complexity, heterogeneous data types, and high distributional variability. To address this, we propose three tabular-data-specific metrics: FAED (Feature-Aware Anomaly Detection), FPCAD (Feature-Per-Class Association Deviation), and RFIS (Resampling-based Fidelity Index Score). FAED systematically detects previously overlooked anomalies such as mode collapse and boundary distortion; FPCAD quantifies how well feature-to-class dependencies are preserved; and RFIS improves the robustness of distributional fidelity assessment through resampling-based consistency. Together, they constitute the first systematic benchmarking framework for tabular generative models. Experiments on three network intrusion detection datasets show that FAED significantly improves the detection of anomalous generation patterns, RFIS substantially stabilizes distributional evaluation, and FPCAD shows promising discriminative capability, though further refinement is needed to improve its reliability.
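The summary does not give formal definitions of the metrics, but the resampling-based consistency idea behind RFIS can be sketched as follows. Everything here is an illustrative assumption: the function name, the choice of a per-feature 1-D Wasserstein distance, and the sample sizes are placeholders, not the paper's actual formulation.

```python
import numpy as np

def rfis_sketch(real, synth, n_resamples=50, sample_size=500, seed=0):
    """Illustrative resampling-based fidelity score (NOT the paper's exact RFIS).

    Repeatedly subsample both datasets, compute a per-feature 1-D
    Wasserstein distance on each draw, and report the mean distance and
    its spread across resamples (lower spread = more stable estimate).
    """
    rng = np.random.default_rng(seed)
    scores = []
    for _ in range(n_resamples):
        r = real[rng.integers(0, len(real), sample_size)]
        s = synth[rng.integers(0, len(synth), sample_size)]
        # Equal-size 1-D Wasserstein distance per feature: mean absolute
        # difference of the sorted samples.
        d = np.mean(np.abs(np.sort(r, axis=0) - np.sort(s, axis=0)), axis=0)
        scores.append(d.mean())
    scores = np.asarray(scores)
    return scores.mean(), scores.std()

# Toy check: a faithful generator (same distribution) should score lower
# than one exhibiting mode collapse (all samples at a single point).
rng = np.random.default_rng(1)
real = rng.normal(size=(2000, 5))
good = rng.normal(size=(2000, 5))
collapsed = np.zeros((2000, 5))
m_good, _ = rfis_sketch(real, good)
m_bad, _ = rfis_sketch(real, collapsed)
```

Averaging over many resamples is what would give such a score its stability: a single-draw distance fluctuates with the subsample, while the resample mean converges.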

📝 Abstract
Generative models have revolutionized multiple domains, yet their application to tabular data remains underexplored. Evaluating generative models for tabular data presents unique challenges due to structural complexity, large-scale variability, and mixed data types, making it difficult to intuitively capture intricate patterns. Existing evaluation metrics offer only partial insights, lacking a comprehensive measure of generative performance. To address this limitation, we propose three novel evaluation metrics: FAED, FPCAD, and RFIS. Our extensive experimental analysis, conducted on three standard network intrusion detection datasets, compares these metrics with established evaluation methods such as Fidelity, Utility, TSTR, and TRTS. Our results demonstrate that FAED effectively captures generative modeling issues overlooked by existing metrics. While FPCAD exhibits promising performance, further refinements are necessary to enhance its reliability. Our proposed framework provides a robust and practical approach for assessing generative models in tabular data applications.
Problem

Research questions and friction points this paper is trying to address.

Evaluating generative models for tabular data lacks comprehensive metrics
Existing metrics fail to capture structural complexity and mixed data types
Proposing novel metrics FAED, FPCAD, RFIS to improve evaluation robustness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposes FAED, FPCAD, RFIS metrics for tabular data
Benchmarks metrics on intrusion detection datasets
FAED captures issues missed by existing metrics
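The feature-to-class dependency preservation that FPCAD targets can be illustrated with a minimal sketch. This is an assumed form, not the paper's definition: comparing only class-conditional feature means is the simplest possible association measure, and all names are hypothetical.

```python
import numpy as np

def fpcad_sketch(real_X, real_y, synth_X, synth_y):
    """Illustrative feature-per-class association deviation (assumed form,
    NOT the paper's exact FPCAD).

    For each class, compare the per-feature means of real vs. synthetic
    samples; the score is the average absolute deviation, so 0 means the
    feature-to-class dependencies are perfectly preserved.
    """
    devs = []
    for c in np.unique(real_y):
        r_mean = real_X[real_y == c].mean(axis=0)
        s_mean = synth_X[synth_y == c].mean(axis=0)
        devs.append(np.abs(r_mean - s_mean).mean())
    return float(np.mean(devs))

# Toy check: synthetic data that swaps the class-conditional means should
# deviate far more than synthetic data that preserves them.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 1000)
X = rng.normal(loc=y[:, None] * 2.0, size=(1000, 3))  # class 1 shifted by +2
X_good = rng.normal(loc=y[:, None] * 2.0, size=(1000, 3))
X_swapped = rng.normal(loc=(1 - y)[:, None] * 2.0, size=(1000, 3))
d_good = fpcad_sketch(X, y, X_good, y)
d_swapped = fpcad_sketch(X, y, X_swapped, y)
```

A metric of this shape catches a failure mode that marginal-distribution metrics miss: the swapped data above has exactly the same overall feature distribution as the real data, yet its feature-to-class association is inverted.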