TACTIC for Navigating the Unknown: Tabular Anomaly deteCTion via In-Context inference

📅 2026-03-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses two weaknesses of unsupervised anomaly detection on tabular data: unstable behavior under noisy conditions and high computational cost. It proposes TACTIC, an in-context anomaly detection method grounded in synthetic anomaly-centric priors. TACTIC introduces, for the first time, an anomaly-centric prior into an in-context learning framework to build a foundation model tailored for anomaly detection. The model makes direct, explicit anomaly decisions through discriminative prediction in a single forward pass, eliminating the need for post-processing or dataset-specific hyperparameter tuning. Leveraging a Transformer-based architecture pretrained on synthetically generated anomalies, it achieves rapid, data-adaptive inference. Extensive experiments show that TACTIC matches or surpasses specialized methods across diverse real-world datasets, consistently delivering robust performance under varying anomaly types, contamination levels, and anomaly ratios.

📝 Abstract
Anomaly detection for tabular data is a long-standing unsupervised learning problem that remains a major challenge for current deep learning models. Recently, in-context learning has emerged as a new paradigm that shifts effort from task-specific optimization to large-scale pretraining aimed at creating foundation models that generalize across diverse datasets. Although in-context models such as TabPFN perform well on supervised problems, their learned classification-based priors may not readily extend to anomaly detection. In this paper, we study in-context models for anomaly detection and show that unsupervised extensions of TabPFN exhibit unstable behavior, particularly in noisy or contaminated contexts, in addition to incurring high computational cost. We address these challenges and introduce TACTIC, an in-context anomaly detection approach based on pretraining with anomaly-centric synthetic priors, which provides fast, data-dependent reasoning about anomalies while avoiding dataset-specific tuning. In contrast to typical score-based approaches, which produce uncalibrated anomaly scores that require post-processing (e.g., threshold selection or ranking heuristics), the proposed model is trained as a discriminative predictor, enabling unambiguous anomaly decisions in a single forward pass. Through experiments on real-world datasets, we examine the performance of TACTIC in clean and noisy contexts with varying anomaly rates and different anomaly types, as well as the impact of prior choices on detection quality. Our experiments clearly show that specialized anomaly-centric in-context models such as TACTIC are highly competitive with task-specific methods.
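The abstract's central contrast, a discriminative predictor that emits hard anomaly decisions in a single forward pass versus score-based methods that need threshold post-processing, can be illustrated with a toy stand-in. Everything below is an assumption for illustration only: the Gaussian inlier distribution, the injected far-away anomalies (loosely mimicking pretraining on synthetically generated anomalies), and the tiny logistic predictor standing in for the pretrained Transformer. None of it reflects TACTIC's actual prior or architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "prior" (illustrative assumption): inliers from a standard
# Gaussian, anomalies sampled far from it. This loosely mimics pretraining
# on synthetically generated anomalies; it is not TACTIC's actual prior.
X_in = rng.normal(0.0, 1.0, size=(200, 2))       # normal rows
X_out = rng.uniform(4.0, 6.0, size=(20, 2))      # injected anomalies
X = np.vstack([X_in, X_out])
y = np.concatenate([np.zeros(200), np.ones(20)])  # 1 = anomaly

# Tiny logistic-regression "discriminative predictor" fit on the synthetic
# data -- a stand-in for the pretrained Transformer, trained by plain
# gradient descent on the logistic loss.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    g = p - y
    w -= 0.1 * (X.T @ g) / len(y)
    b -= 0.1 * g.mean()

def predict(queries):
    """One forward pass: hard anomaly decisions, no threshold tuning."""
    p = 1.0 / (1.0 + np.exp(-(queries @ w + b)))
    return (p > 0.5).astype(int)

decisions = predict(np.array([[0.0, 0.0], [5.0, 5.0]]))  # inlier, anomaly
```

The point of the sketch is the interface, not the model: a score-based detector would return the probabilities `p` and leave threshold selection to the user, whereas the discriminative predictor returns 0/1 decisions directly.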
Problem

Research questions and friction points this paper is trying to address.

anomaly detection
tabular data
in-context learning
unsupervised learning
foundation models
Innovation

Methods, ideas, or system contributions that make the work stand out.

in-context learning
tabular anomaly detection
anomaly-centric priors
discriminative predictor
foundation models