A Unified Agentic Framework for Evaluating Conditional Image Generation

📅 2025-04-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Conditional image generation lacks task-agnostic, reliable, and interpretable automated evaluation metrics. Method: We propose CIGEval, the first unified evaluation framework powered by large multimodal model (LMM)-based agents. CIGEval couples LMMs with a versatile toolkit, letting the agent autonomously select tools and perform multi-step reasoning over tool outputs for fine-grained, traceable assessment with cross-task consistency. We also synthesize evaluation trajectories and use them to fine-tune compact open-source LMMs for the same agentic workflow. Contribution/Results: Instantiated with GPT-4o, CIGEval achieves a Pearson correlation of 0.4625 with human judgments, nearly matching the inter-annotator agreement of 0.47. A lightweight 7B open-source LMM, fine-tuned on only 2.3K synthetic trajectories, surpasses the prior GPT-4o-based state of the art. To our knowledge, this is the first framework to deliver reliable, interpretable, and computationally efficient evaluation across diverse conditional image generation tasks.

📝 Abstract
Conditional image generation has gained significant attention for its ability to personalize content. However, the field faces challenges in developing task-agnostic, reliable, and explainable evaluation metrics. This paper introduces CIGEval, a unified agentic framework for comprehensive evaluation of conditional image generation tasks. CIGEval utilizes large multimodal models (LMMs) as its core, integrating a multi-functional toolbox and establishing a fine-grained evaluation framework. Additionally, we synthesize evaluation trajectories for fine-tuning, empowering smaller LMMs to autonomously select appropriate tools and conduct nuanced analyses based on tool outputs. Experiments across seven prominent conditional image generation tasks demonstrate that CIGEval (GPT-4o version) achieves a high correlation of 0.4625 with human assessments, closely matching the inter-annotator correlation of 0.47. Moreover, when implemented with 7B open-source LMMs using only 2.3K training trajectories, CIGEval surpasses the previous GPT-4o-based state-of-the-art method. Case studies on GPT-4o image generation highlight CIGEval's capability in identifying subtle issues related to subject consistency and adherence to control guidance, indicating its great potential for automating evaluation of image generation tasks with human-level reliability.
Problem

Research questions and friction points this paper is trying to address.

Develop task-agnostic evaluation metrics for conditional image generation
Create reliable and explainable assessment for personalized content generation
Automate human-level evaluation of image generation tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses large multimodal models for evaluation
Integrates multi-functional toolbox for analysis
Synthesizes trajectories to fine-tune smaller models
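The headline result above (0.4625 vs. an inter-annotator 0.47) is a standard Pearson correlation between automated scores and human ratings. A minimal sketch of that computation, using made-up scores rather than the paper's data:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical evaluator scores vs. human ratings for five generated images
auto_scores = [0.9, 0.4, 0.7, 0.2, 0.8]
human_ratings = [1.0, 0.5, 0.6, 0.3, 0.9]
print(round(pearson(auto_scores, human_ratings), 4))
```

In practice one would aggregate scores per task (the paper evaluates seven conditional generation tasks) before correlating; the helper and the sample scores here are illustrative only.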