Q-Former Autoencoder: A Modern Framework for Medical Anomaly Detection

📅 2025-07-24
🤖 AI Summary
Medical image anomaly detection is challenging due to the diversity of possible anomalies and the severe scarcity of annotated data. To address this, we propose an unsupervised framework built on a Q-Former-based autoencoder. Specifically, we freeze large-scale vision foundation models (DINO, DINOv2, or MAE) as multi-stage feature extractors and introduce the Q-Former as a learnable bottleneck that flexibly fuses features across scales and controls the length of the reconstruction sequence. A perceptual loss computed with features from a pretrained MAE further guides the reconstruction towards semantically meaningful structures. Only the bottleneck and decoder are trained; the frozen visual backbone is never fine-tuned. Evaluated on four benchmarks (BraTS2021, RESC, RSNA, and ISIC2018), the method achieves state-of-the-art results on BraTS2021, RESC, and RSNA, demonstrating that vision models pretrained on natural images can localize anomalies in medical imaging without any backbone adaptation.

📝 Abstract
Anomaly detection in medical images is an important yet challenging task due to the diversity of possible anomalies and the practical impossibility of collecting comprehensively annotated datasets. In this work, we tackle unsupervised medical anomaly detection by proposing a modernized autoencoder-based framework, the Q-Former Autoencoder, that leverages state-of-the-art pretrained vision foundation models, such as DINO, DINOv2 and Masked Autoencoder. Instead of training encoders from scratch, we directly utilize frozen vision foundation models as feature extractors, enabling rich, multi-stage, high-level representations without domain-specific fine-tuning. We propose using the Q-Former architecture as the bottleneck, which enables control over the length of the reconstruction sequence while efficiently aggregating multiscale features. Additionally, we incorporate a perceptual loss computed using features from a pretrained Masked Autoencoder, guiding the reconstruction towards semantically meaningful structures. Our framework is evaluated on four diverse medical anomaly detection benchmarks, achieving state-of-the-art results on BraTS2021, RESC, and RSNA. Our results highlight the potential of vision foundation model encoders, pretrained on natural images, to generalize effectively to medical image analysis tasks without further fine-tuning. We release the code and models at https://github.com/emirhanbayar/QFAE.
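The core idea of the bottleneck described in the abstract can be sketched in PyTorch: a fixed set of learnable queries cross-attends to the (variable-length, multi-scale) tokens produced by a frozen backbone, yielding a latent of controllable length. This is a minimal illustrative sketch, not the paper's implementation; the dimensions, query count, and layer choices here are assumptions.

```python
import torch
import torch.nn as nn

class QFormerBottleneck(nn.Module):
    """Learnable queries cross-attend to backbone tokens, producing a
    fixed-length latent regardless of the input sequence length.
    Sketch only: sizes are illustrative, not the paper's settings."""
    def __init__(self, num_queries: int = 32, dim: int = 256, num_heads: int = 8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, dim))
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(dim, dim * 4), nn.GELU(), nn.Linear(dim * 4, dim)
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, N, dim) -- e.g. concatenated multi-stage tokens
        # from a frozen encoder (DINO / DINOv2 / MAE in the paper).
        q = self.queries.unsqueeze(0).expand(feats.size(0), -1, -1)
        z, _ = self.cross_attn(q, feats, feats)  # (B, num_queries, dim)
        return z + self.ffn(z)

# Stand-in for frozen backbone features (196 patch tokens of dim 256).
feats = torch.randn(2, 196, 256)
latent = QFormerBottleneck(num_queries=32, dim=256)(feats)
print(tuple(latent.shape))  # (2, 32, 256)
```

Because the query count, not the input length, determines the latent size, the same module can aggregate token sequences of any length from any backbone stage.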
Problem

Research questions and friction points this paper is trying to address.

Unsupervised anomaly detection in diverse medical images
Leveraging pretrained vision models without fine-tuning
Improving reconstruction with Q-Former and perceptual loss
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses frozen vision foundation models as feature extractors
Employs Q-Former architecture for efficient feature aggregation
Incorporates perceptual loss from pretrained Masked Autoencoder
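The perceptual loss named in the last bullet compares reconstructions to inputs in the feature space of a frozen pretrained network (the MAE encoder in the paper) rather than in pixel space. A minimal sketch, assuming PyTorch; the `extractor` below is a toy stand-in for the pretrained MAE features, not the actual model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def perceptual_loss(recon: torch.Tensor,
                    target: torch.Tensor,
                    extractor: nn.Module) -> torch.Tensor:
    """MSE between frozen-network features of the reconstruction and
    the input. `extractor` stands in for the pretrained MAE encoder."""
    with torch.no_grad():
        feat_target = extractor(target)   # no gradient through the target path
    feat_recon = extractor(recon)         # gradients flow to the reconstruction
    return F.mse_loss(feat_recon, feat_target)

# Toy frozen feature extractor (hypothetical stand-in).
extractor = nn.Linear(64, 32)
for p in extractor.parameters():
    p.requires_grad_(False)

x = torch.randn(4, 64)
loss_same = perceptual_loss(x, x, extractor)
print(float(loss_same))  # identical inputs -> 0.0
```

In training, this term is added to the reconstruction objective so that anomalous regions, which the autoencoder fails to reconstruct faithfully, produce large feature-space errors usable as anomaly scores.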
Francesco Dalmonte
University of Bologna, Italy
Emirhan Bayar
Middle East Technical University, Ankara, Türkiye
Emre Akbas
Helmholtz Munich | Middle East Technical University (METU)
computer vision · deep learning · machine learning · object detection · human pose estimation
Mariana-Iuliana Georgescu
Helmholtz Munich, Germany