Enhanced Generative Model Evaluation with Clipped Density and Coverage

📅 2025-07-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Generative models remain hard to deploy in critical applications because existing sample-quality evaluation metrics are unreliable: commonly uncalibrated, sensitive to outliers, and lacking interpretable numerical meaning. This paper proposes a robust, interpretable evaluation framework that jointly quantifies fidelity and coverage. Its core innovation is a dual truncation mechanism: distribution-level truncation of nearest-neighbor ball radii and instance-level truncation of individual sample contributions, both of which suppress the influence of outliers and out-of-distribution samples. Coupled with analytical and empirical calibration, the metrics degrade linearly as sample quality drops, so scores directly reflect the proportion of high-fidelity samples. Experiments across synthetic and real-world datasets demonstrate that the proposed metrics outperform state-of-the-art baselines in robustness, sensitivity, and interpretability.

📝 Abstract
Although generative models have made remarkable progress in recent years, their use in critical applications has been hindered by their incapacity to reliably evaluate sample quality. Quality refers to at least two complementary concepts: fidelity and coverage. Current quality metrics often lack reliable, interpretable values due to an absence of calibration or insufficient robustness to outliers. To address these shortcomings, we introduce two novel metrics, Clipped Density and Clipped Coverage. By clipping individual sample contributions and, for fidelity, the radii of nearest neighbor balls, our metrics prevent out-of-distribution samples from biasing the aggregated values. Through analytical and empirical calibration, these metrics exhibit linear score degradation as the proportion of poor samples increases. Thus, they can be straightforwardly interpreted as equivalent proportions of good samples. Extensive experiments on synthetic and real-world datasets demonstrate that Clipped Density and Clipped Coverage outperform existing methods in terms of robustness, sensitivity, and interpretability for evaluating generative models.
Problem

Research questions and friction points this paper is trying to address.

Evaluate generative model sample quality reliably
Address fidelity and coverage metric shortcomings
Improve robustness and interpretability of evaluation metrics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduced Clipped Density and Clipped Coverage metrics
Clipped per-sample contributions and nearest-neighbor ball radii to curb outlier bias
Calibrated metrics analytically and empirically for linear score degradation
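The clipping idea behind Clipped Density can be sketched in a few lines. The following is an illustrative approximation, not the paper's exact formulation: it builds on the standard nearest-neighbor-ball density estimate, caps each ball radius at a quantile (distribution-level truncation) and clips each generated sample's contribution to 1 (instance-level truncation). The function name, the `k=5` default, and the `radius_quantile` parameter are assumptions for demonstration.

```python
import numpy as np

def clipped_density(real, fake, k=5, radius_quantile=0.9):
    """Illustrative clipped density score in [0, 1] (assumed formulation)."""
    # k-th nearest-neighbor distance for each real point
    # (index 0 after sorting is the zero self-distance).
    d_rr = np.linalg.norm(real[:, None] - real[None], axis=-1)
    radii = np.sort(d_rr, axis=1)[:, k]
    # Distribution-level truncation (assumption): cap radii at a quantile
    # so a few outlier real points cannot inflate their balls.
    radii = np.minimum(radii, np.quantile(radii, radius_quantile))
    # For each fake sample, count the real balls that cover it, scaled by k.
    d_fr = np.linalg.norm(fake[:, None] - real[None], axis=-1)
    counts = (d_fr < radii[None]).sum(axis=1) / k
    # Instance-level truncation (assumption): clip each sample's
    # contribution so no single fake point can dominate the average.
    return float(np.clip(counts, 0.0, 1.0).mean())
```

With this clipping, a generated set drawn far outside the real data's support scores 0, while a set matching the real distribution scores close to 1, consistent with the interpretation of the score as a proportion of good samples.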