A Pragmatic Note on Evaluating Generative Models with Fréchet Inception Distance for Retinal Image Synthesis

📅 2025-02-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper identifies a systematic misalignment between generic generative evaluation metrics—such as Fréchet Inception Distance (FID)—and downstream task performance (e.g., classification or segmentation) in retinal image synthesis. Method: Through systematic experiments across multimodal retinal datasets (fundus photography and OCT), we empirically analyze the correlation between FID (and its variants) and actual gains in downstream model performance. Contribution/Results: We provide the first empirical evidence that FID scores fail to predict whether synthetic data meaningfully improve downstream model accuracy. To address this, we propose a “task-driven evaluation” paradigm, advocating direct assessment via target downstream task performance—replacing proxy metrics reliant on ImageNet-pretrained features. Our findings are robustly replicated across multiple public retinal image benchmarks, offering both methodological insight and practical guidance for evaluating biomedical image generation.
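The "task-driven evaluation" paradigm described above scores synthetic data by the downstream gain it actually yields rather than by a proxy metric such as FID. A minimal sketch of that idea, using a scikit-learn logistic regression as a hypothetical stand-in for the paper's classification/segmentation networks (function and variable names are illustrative, not from the paper):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def task_driven_score(real_X, real_y, synth_X, synth_y, test_X, test_y):
    """Task-driven evaluation sketch: rate synthetic data by the change
    in downstream test accuracy it produces, not by a proxy metric.

    A positive return value means augmenting the real training set with
    the synthetic samples improved the downstream model.
    """
    # Baseline: train on real data only.
    base = LogisticRegression(max_iter=1000).fit(real_X, real_y)
    # Augmented: train on real + synthetic data.
    aug = LogisticRegression(max_iter=1000).fit(
        np.vstack([real_X, synth_X]),
        np.concatenate([real_y, synth_y]),
    )
    acc_base = accuracy_score(test_y, base.predict(test_X))
    acc_aug = accuracy_score(test_y, aug.predict(test_X))
    return acc_aug - acc_base
```

In practice the downstream model would be a retinal classification or segmentation network and the score a task metric such as accuracy or Dice, but the evaluation logic is the same: compare with-synthetic against without-synthetic on held-out real data.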

📝 Abstract
Fréchet Inception Distance (FID), computed with an ImageNet pretrained Inception-v3 network, is widely used as a state-of-the-art evaluation metric for generative models. It assumes that feature vectors from Inception-v3 follow a multivariate Gaussian distribution and calculates the 2-Wasserstein distance based on their means and covariances. While FID effectively measures how closely synthetic data match real data in many image synthesis tasks, the primary goal in biomedical generative models is often to enrich training datasets ideally with corresponding annotations. For this purpose, the gold standard for evaluating generative models is to incorporate synthetic data into downstream task training, such as classification and segmentation, to pragmatically assess its performance. In this paper, we examine cases from retinal imaging modalities, including color fundus photography and optical coherence tomography, where FID and its related metrics misalign with task-specific evaluation goals in classification and segmentation. We highlight the limitations of using various metrics, represented by FID and its variants, as evaluation criteria for these applications and address their potential caveats in broader biomedical imaging modalities and downstream tasks.
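The Gaussian assumption described in the abstract makes FID a closed-form expression: the squared 2-Wasserstein distance between two Gaussians is determined by their means and covariances. A minimal sketch of that computation (the real metric extracts the feature rows from an Inception-v3 pool layer; here they are plain NumPy arrays):

```python
import numpy as np
from scipy import linalg

def fid(feats_real, feats_synth):
    """Fréchet Inception Distance between two sets of feature vectors.

    Each row is one feature vector. Both sets are modeled as multivariate
    Gaussians and compared via the squared 2-Wasserstein distance:
    ||mu_r - mu_s||^2 + Tr(S_r + S_s - 2 (S_r S_s)^{1/2}).
    """
    mu_r, mu_s = feats_real.mean(axis=0), feats_synth.mean(axis=0)
    sigma_r = np.cov(feats_real, rowvar=False)
    sigma_s = np.cov(feats_synth, rowvar=False)
    # Matrix square root of the covariance product.
    covmean, _ = linalg.sqrtm(sigma_r @ sigma_s, disp=False)
    if np.iscomplexobj(covmean):
        # Discard negligible imaginary parts from numerical error.
        covmean = covmean.real
    diff = mu_r - mu_s
    return float(diff @ diff + np.trace(sigma_r + sigma_s - 2.0 * covmean))
```

Identical feature sets give a score near zero, and the score grows as the two distributions drift apart; the paper's point is that this distance over ImageNet-pretrained features need not track downstream retinal task performance.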
Problem

Research questions and friction points this paper is trying to address.

Limitations of FID for evaluating retinal image synthesis models
Misalignment between FID-style metrics and downstream biomedical task performance
Assessing generative models via downstream task performance in biomedicine
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluates generative models using Fréchet Inception Distance
Compares synthetic and real feature distributions under Gaussian assumptions
Assesses synthetic data utility via downstream classification and segmentation
Yuli Wu
RWTH Aachen University
Computer Vision, Retinal Prosthesis
Fucheng Liu
Institute of Imaging & Computer Vision, RWTH Aachen University, Germany
Ruveyda Yilmaz
Institute of Imaging & Computer Vision, RWTH Aachen University, Germany
Henning Konermann
PhD Candidate, RWTH Aachen University
Artificial Vision
Peter Walter
Department of Ophthalmology, RWTH Aachen University, Germany
Johannes Stegmaier
RWTH Aachen University
3D+t Image Analysis, Machine Learning, Microscopy, Developmental Biology, Medical Image Analysis