🤖 AI Summary
Problem: Evaluating the multi-view consistency of generated images lacks reference-free, sampling-process-agnostic metrics, because ground-truth 3D supervision is generally unavailable. Method: We propose MEt3R, the first metric for multi-view consistency that requires no 3D labels and is agnostic to the sampling procedure. It integrates dense feed-forward 3D reconstruction (DUSt3R) with feature-level view reprojection: image pairs are lifted to 3D end-to-end, warped differentiably from one view into the other, and compared via feature similarity (CLIP/ViT-L), yielding a scene-agnostic consistency measure that is invariant to view-dependent effects. Results: Evaluation across diverse generative models, including our open multi-view latent diffusion model, shows that MEt3R correlates well with human perception and clearly outperforms traditional metrics such as PSNR and SSIM, while avoiding the reliance on ground-truth 3D annotations that limits conventional 3D evaluation.
📝 Abstract
We introduce MEt3R, a metric for multi-view consistency in generated images. Large-scale generative models for multi-view image generation are rapidly advancing the field of 3D inference from sparse observations. However, due to the nature of generative modeling, traditional reconstruction metrics are not suitable for measuring the quality of generated outputs, and metrics that are independent of the sampling procedure are desperately needed. In this work, we specifically address the aspect of consistency between generated multi-view images, which can be evaluated independently of the specific scene. Our approach uses DUSt3R to obtain dense 3D reconstructions from image pairs in a feed-forward manner, which are used to warp image contents from one view into the other. Then, feature maps of these images are compared to obtain a similarity score that is invariant to view-dependent effects. Using MEt3R, we evaluate the consistency of a large set of previous methods for novel view and video generation, including our open, multi-view latent diffusion model.
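To make the warp-and-compare step concrete, below is a minimal sketch in PyTorch. It assumes a dense per-pixel point map for view 1, camera parameters for view 2 (as DUSt3R-style reconstruction would provide), and pre-computed feature maps for both views. The function names, the cosine-similarity comparison, and the score normalization here are illustrative assumptions, not the paper's exact formulation or the authors' API.

```python
# Sketch of a MEt3R-style consistency check: project view-1 points into view 2,
# warp view-2 features back to view 1, and compare with view-1 features.
# All names and the exact scoring are illustrative, not the official implementation.

import torch
import torch.nn.functional as F


def project_to_view2(pts_world: torch.Tensor, K2: torch.Tensor,
                     T_world_to_cam2: torch.Tensor) -> torch.Tensor:
    """Project the dense point map of view 1 into view 2's image plane.

    pts_world: (H, W, 3) per-pixel 3D points of view 1 in world coordinates.
    K2: (3, 3) intrinsics of view 2.
    T_world_to_cam2: (4, 4) world-to-camera transform of view 2.
    Returns (H, W, 2) pixel coordinates in view 2.
    """
    H, W, _ = pts_world.shape
    ones = torch.ones(H, W, 1, dtype=pts_world.dtype, device=pts_world.device)
    pts_h = torch.cat([pts_world, ones], dim=-1)                   # homogeneous coords
    pts_cam2 = (pts_h.reshape(-1, 4) @ T_world_to_cam2.T)[:, :3]   # into camera-2 frame
    uvw = pts_cam2 @ K2.T                                          # pinhole projection
    uv = uvw[:, :2] / uvw[:, 2:3].clamp(min=1e-6)                  # perspective divide
    return uv.reshape(H, W, 2)


def consistency_score(feat1: torch.Tensor, feat2: torch.Tensor,
                      uv_in_2: torch.Tensor) -> torch.Tensor:
    """Warp view-2 features to view 1 via the projected coordinates and
    compare against view-1 features with cosine similarity.

    feat1, feat2: (C, H, W) feature maps from a frozen backbone.
    uv_in_2: (H, W, 2) pixel coordinates of view-1 points in view 2.
    Returns a scalar where lower means more consistent (distance-style score).
    """
    C, H, W = feat2.shape
    # Normalize pixel coordinates to [-1, 1] as expected by grid_sample.
    grid = uv_in_2.clone()
    grid[..., 0] = 2.0 * grid[..., 0] / (W - 1) - 1.0
    grid[..., 1] = 2.0 * grid[..., 1] / (H - 1) - 1.0
    warped = F.grid_sample(feat2[None], grid[None], align_corners=True,
                           padding_mode="zeros")[0]                # (C, H, W)
    sim = F.cosine_similarity(feat1, warped, dim=0)                # (H, W)
    return 1.0 - sim.mean()
```

Comparing feature maps rather than raw pixels is what gives the score its robustness: view-dependent effects such as shading and exposure shift raw colors but perturb high-level features far less, so only genuine geometric or content inconsistencies are penalized.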