🤖 AI Summary
Problem: Automatic evaluation methods for generated content are plentiful, but the field lacks a systematic, cross-modal framework for organizing them.
Method: This paper conducts a large-scale literature review and cross-modal comparative analysis to establish, for the first time, a unified evaluation taxonomy covering the text, image, and audio modalities. It identifies five fundamental evaluation paradigms and empirically validates their applicability across three representative generative tasks. It further introduces a comparability analysis framework that constructs a structured knowledge graph clarifying the capability boundaries and limitations of existing methods for each modality.
Contributions/Results: (1) the first cross-modal unified classification system for generative evaluation; (2) the abstraction of generalizable, transferable evaluation paradigms; and (3) a theoretical foundation and practical methodology for consistent cross-modal evaluation and joint metric design. This work bridges critical gaps in evaluating multimodal generative models and enables principled, interoperable assessment across modalities.
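To make the "structured knowledge graph" idea concrete, here is a minimal sketch of one way such a graph could be represented. This is our own illustration, not the paper's framework: the summary does not enumerate the five paradigms, so the paradigm labels, method placements, and limitation notes below are placeholder assumptions.

```python
# Illustrative sketch only. Paradigm names, method placements, and
# limitation notes are hypothetical; the paper's actual taxonomy is
# not enumerated in this summary.
from dataclasses import dataclass, field

MODALITIES = ("text", "image", "audio")

@dataclass
class EvaluationMethod:
    """A node in a hypothetical comparability knowledge graph."""
    name: str
    paradigm: str                                   # abstract paradigm label
    modalities: set = field(default_factory=set)    # where the method applies
    limitations: list = field(default_factory=list)

# Toy entries: BLEU and FID are real metrics, but their paradigm labels
# and limitation notes here are our own placeholders.
methods = [
    EvaluationMethod("BLEU", paradigm="reference-based overlap",
                     modalities={"text"},
                     limitations=["insensitive to semantics"]),
    EvaluationMethod("FID", paradigm="distribution matching",
                     modalities={"image"},
                     limitations=["requires many samples"]),
]

def methods_for(modality: str) -> list:
    """Query the graph: which methods cover a given modality?"""
    return [m.name for m in methods if modality in m.modalities]

print(methods_for("text"))   # ['BLEU']
print(methods_for("audio"))  # []  -> a capability gap made explicit
```

Even this toy structure shows the intended payoff: querying the graph by modality surfaces which evaluation methods apply where, and empty results expose capability gaps that a cross-modal taxonomy is meant to reveal.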
📝 Abstract
Recent advances in deep learning have significantly enhanced generative AI capabilities across text, images, and audio. However, automatically evaluating the quality of these generated outputs remains challenging. Although numerous automatic evaluation methods exist, current research lacks a systematic framework that comprehensively organizes them across the text, visual, and audio modalities. To address this gap, we present a comprehensive review and a unified taxonomy of automatic evaluation methods for generated content across all three modalities. We identify five fundamental paradigms that characterize existing evaluation approaches across these domains. Our analysis begins by examining evaluation methods for text generation, where techniques are most mature. We then extend this framework to image and audio generation, demonstrating its broad applicability. Finally, we discuss promising directions for future research in cross-modal evaluation methodologies.