🤖 AI Summary
Research on assessing graphic design aesthetics with vision-language models (VLMs) lacks systematic evaluation benchmarks, high-quality training data, and fine-grained reasoning supervision. To address this gap, this work introduces AesEval-Bench, the first multidimensional benchmark tailored for graphic design aesthetics, encompassing four dimensions, twelve indicators, and three quantifiable tasks. Furthermore, the authors propose a human-guided, indicator-driven region-alignment annotation mechanism to generate large-scale, fine-grained training data. Experimental results demonstrate that the proposed framework significantly enhances VLM performance on aesthetic judgment, region selection, and precise localization, while also revealing inherent limitations of current models in comprehending complex aesthetic principles.
📝 Abstract
Assessing the aesthetic quality of graphic design is central to visual communication, yet remains underexplored in vision-language models (VLMs). We investigate whether VLMs can evaluate design aesthetics in ways comparable to humans. Prior work faces three key limitations: benchmarks restricted to narrow principles and coarse evaluation protocols, a lack of systematic VLM comparisons, and limited training data for model improvement. In this work, we introduce AesEval-Bench, a comprehensive benchmark spanning four dimensions, twelve indicators, and three fully quantifiable tasks: aesthetic judgment, region selection, and precise localization. We then systematically evaluate proprietary, open-source, and reasoning-augmented VLMs, revealing clear performance gaps against the nuanced demands of aesthetic assessment. Moreover, we construct a training dataset to fine-tune VLMs for this domain, leveraging human-guided VLM labeling to produce task labels at scale and indicator-grounded reasoning to tie abstract indicators to concrete design regions. Together, our work establishes the first systematic framework for aesthetic quality assessment in graphic design. Our code and dataset will be released at: [https://github.com/arctanxarc/AesEval-Bench](https://github.com/arctanxarc/AesEval-Bench)