FoMo-X: Modular Explainability Signals for Outlier Detection Foundation Models

📅 2026-03-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing foundation models for tabular anomaly detection lack interpretability, which hinders their use in safety-critical decision-making. To address this limitation, this work proposes FoMo-X, a framework that introduces a lightweight diagnostic head atop a frozen PFN backbone, embedding modular diagnostic signals (such as anomaly severity grading and uncertainty quantification) directly into the zero-shot anomaly detection pipeline for the first time. By leveraging generative simulation priors during training, FoMo-X distills computationally expensive uncertainty estimation techniques such as Monte Carlo dropout into a single forward pass. On benchmarks such as ADBench, this approach faithfully reproduces the reference diagnostic signals with negligible inference overhead, significantly enhancing the trustworthiness, real-time performance, and operational utility of zero-shot anomaly detection.
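The distillation idea in the summary can be sketched roughly as follows. This is an illustrative toy, not the FoMo-X implementation: the random-projection "backbone", the dropout rate, and the least-squares probe are all invented stand-ins, assuming only the general recipe described above (a frozen embedding, an expensive Monte Carlo dropout teacher, and a cheap deterministic head trained to reproduce the teacher's uncertainty signal).

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen PFN backbone: a fixed random projection that
# yields an embedding, plus a frozen linear scoring layer.
W_backbone = rng.normal(size=(8, 16))
w_score = rng.normal(size=16)

def embed(x):
    """Deterministic 'frozen' embedding of raw features."""
    return np.tanh(x @ W_backbone)

def mc_dropout_variance(x, p=0.5, T=200):
    """Teacher signal: score variance over T stochastic dropout passes."""
    e = embed(x)
    scores = np.stack([
        ((e * (rng.random(e.shape) > p)) / (1 - p)) @ w_score
        for _ in range(T)
    ])
    return scores.var(axis=0)

# Offline "distillation": fit a lightweight head on the frozen embeddings
# so a single deterministic pass reproduces the MC-dropout variance.
X = rng.normal(size=(512, 8))
E = embed(X)
target = mc_dropout_variance(X)

# Squared embeddings (plus bias) are a natural feature basis here, since
# dropout variance is linear in e_j^2; solve the probe by least squares.
F = np.concatenate([E**2, np.ones((len(E), 1))], axis=1)
w_head, *_ = np.linalg.lstsq(F, target, rcond=None)

single_pass_var = F @ w_head          # one cheap forward pass, no sampling
corr = np.corrcoef(single_pass_var, target)[0, 1]
```

In this toy setup the single-pass head correlates strongly with the sampled teacher signal, which is the essence of the claimed "negligible inference overhead": the sampling cost is paid once offline, not at deployment.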

📝 Abstract
Tabular foundation models, specifically Prior-Data Fitted Networks (PFNs), have revolutionized outlier detection (OD) by enabling unsupervised zero-shot adaptation to new datasets without training. However, despite their predictive power, these models typically function as opaque black boxes, outputting scalar outlier scores that lack the operational context required for safety-critical decision-making. Existing post-hoc explanation methods are often computationally prohibitive for real-time deployment or fail to capture the epistemic uncertainty inherent in zero-shot inference. In this work, we introduce FoMo-X, a modular framework that equips OD foundation models with intrinsic, lightweight diagnostic capabilities. We leverage the insight that the frozen embeddings of a pretrained PFN backbone already encode rich, context-conditioned relational information. FoMo-X attaches auxiliary diagnostic heads to these embeddings, trained offline using the same generative simulator prior as the backbone. This allows us to distill computationally expensive properties, such as Monte Carlo dropout-based epistemic uncertainty, into a deterministic, single-pass inference. We instantiate FoMo-X with two novel heads: a Severity Head that discretizes deviations into interpretable risk tiers, and an Uncertainty Head that provides calibrated confidence measures. Extensive evaluation on synthetic and real-world benchmarks (ADBench) demonstrates that FoMo-X recovers ground-truth diagnostic signals with high fidelity and negligible inference overhead. By bridging the gap between foundation model performance and operational explainability, FoMo-X offers a scalable path toward trustworthy, zero-shot outlier detection.
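The Severity Head described in the abstract maps continuous deviations to interpretable risk tiers. A minimal sketch of one plausible realization, assuming quantile-based thresholds calibrated offline on a reference score distribution (the tier names, quantile levels, and calibration data below are invented for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical risk tiers for a severity head.
TIERS = ["low", "moderate", "high", "critical"]

def fit_tier_thresholds(calibration_scores, quantiles=(0.5, 0.9, 0.99)):
    """Calibrate tier boundaries on a reference outlier-score distribution."""
    return np.quantile(calibration_scores, quantiles)

def severity_tier(scores, thresholds):
    """Map each score to a tier index: 0 = low ... 3 = critical."""
    return np.searchsorted(thresholds, scores, side="right")

# Calibration scores standing in for a mostly-inlier reference set.
calib = rng.normal(0, 1, 10_000)
thresholds = fit_tier_thresholds(calib)

test_scores = np.array([-0.5, 1.0, 2.0, 5.0])
tiers = severity_tier(test_scores, thresholds)
labels = [TIERS[t] for t in tiers]
```

Quantile thresholds make the tiers directly interpretable ("critical" means roughly the top 1% of reference scores), which matches the paper's framing of severity as operational context rather than a raw scalar.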
Problem

Research questions and friction points this paper is trying to address.

outlier detection
foundation models
explainability
epistemic uncertainty
zero-shot inference
Innovation

Methods, ideas, or system contributions that make the work stand out.

modular explainability
foundation models
zero-shot outlier detection
epistemic uncertainty
diagnostic heads
🔎 Similar Papers
Simon Klüttermann
PhD Student, Computer Science, TU Dortmund
anomaly detection, ensemble learning, machine learning
Tim Katzke
TU Dortmund University, Dortmund, Germany; Research Center Trustworthy Data Science and Security, University Alliance Ruhr, Dortmund, Germany
Phuong Huong Nguyen
TU Dortmund University, Dortmund, Germany
Emmanuel Müller
Professor of Computer Science, Technical University of Dortmund
Data Mining, Machine Learning, Data Exploration, Databases