SlidesGen-Bench: Evaluating Slides Generation via Computational and Quantitative Metrics

📅 2026-01-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing evaluation methods for slide generation systems lack cross-architectural comparability and often rely on subjective or uncalibrated judgments. To address this gap, this work proposes a benchmark built around universality, quantifiability, and reliability: generated outputs are treated as visual renderings, enabling end-to-end quantitative assessment across three dimensions (content, aesthetics, and editability). The approach is agnostic to the underlying generation model and combines computational visual analysis with multidimensional metrics. It also introduces Slides-Align1.5k, a human-preference-aligned dataset. Experiments across nine representative systems and seven distinct scenarios show that the proposed benchmark correlates with human judgments substantially better than existing evaluation protocols.
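Because every system's output, whether code-driven or image-centric, is first rasterized, a single scoring interface can cover all architectures. The sketch below illustrates that render-then-score idea in Python; the contrast-based aesthetics proxy, the stubbed content/editability scorers, and the uniform aggregation weights are illustrative assumptions, not the paper's actual metrics.

```python
# A minimal sketch of the render-then-score idea, assuming outputs are
# rasterized to image files. Dimension names follow the paper; everything
# else (the dataclass, the contrast proxy, the weights) is illustrative.
from dataclasses import dataclass

import numpy as np
from PIL import Image


@dataclass
class SlideScores:
    content: float      # fidelity/coverage of the requested material
    aesthetics: float   # visual quality of the rendered layout
    editability: float  # how recoverable elements are for later editing


def aesthetics_proxy(img: Image.Image) -> float:
    """Toy aesthetics signal: global pixel-intensity contrast in [0, 1]."""
    gray = np.asarray(img.convert("L"), dtype=np.float32) / 255.0
    return float(min(1.0, 2.0 * gray.std()))


def score_rendering(path: str) -> SlideScores:
    """Score one rendered slide; applies to any generator, since only the
    final raster is inspected. Content/editability scorers are stubbed."""
    img = Image.open(path)
    return SlideScores(content=0.0,
                       aesthetics=aesthetics_proxy(img),
                       editability=0.0)


def aggregate(s: SlideScores, w=(1 / 3, 1 / 3, 1 / 3)) -> float:
    """Weighted overall score; uniform weights are an assumption here."""
    return w[0] * s.content + w[1] * s.aesthetics + w[2] * s.editability


if __name__ == "__main__":
    demo = SlideScores(content=0.8, aesthetics=0.6, editability=0.7)
    print(f"overall: {aggregate(demo):.3f}")  # -> overall: 0.700
```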

📝 Abstract
The rapid evolution of Large Language Models (LLMs) has fostered diverse paradigms for automated slide generation, ranging from code-driven layouts to image-centric synthesis. However, evaluating these heterogeneous systems remains challenging, as existing protocols often struggle to provide comparable scores across architectures or rely on uncalibrated judgments. In this paper, we introduce SlidesGen-Bench, a benchmark designed to evaluate slide generation through a lens of three core principles: universality, quantification, and reliability. First, to establish a unified evaluation framework, we ground our analysis in the visual domain, treating terminal outputs as renderings to remain agnostic to the underlying generation method. Second, we propose a computational approach that quantitatively assesses slides across three distinct dimensions (Content, Aesthetics, and Editability), offering reproducible metrics where prior works relied on subjective or reference-dependent proxies. Finally, to ensure high correlation with human preference, we construct the Slides-Align1.5k dataset, a human-preference-aligned dataset covering slides from nine mainstream generation systems across seven scenarios. Our experiments demonstrate that SlidesGen-Bench achieves a higher degree of alignment with human judgment than existing evaluation pipelines. Our code and data are available at https://github.com/YunqiaoYang/SlidesGen-Bench.
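Since the headline claim is agreement with human judgment, a natural way to validate a metric like this is rank correlation against human ratings of the same slides, the kind of signal Slides-Align1.5k provides. The snippet below is a hypothetical sketch of such a check using Spearman's rho; the paper's exact correlation protocol is not given here, so the function and variable names are assumptions.

```python
# Hypothetical alignment check: rank-correlate benchmark scores with human
# ratings of the same slides. The paper reports correlation with human
# judgment; the exact protocol and these names are assumptions.
from scipy.stats import spearmanr


def alignment(bench_scores: list[float], human_ratings: list[float]) -> float:
    """Spearman's rho between a metric's scores and human ratings."""
    rho, _pvalue = spearmanr(bench_scores, human_ratings)
    return float(rho)


# A metric that preserves the human ranking gets rho = 1.0.
print(alignment([0.2, 0.5, 0.9, 0.7], [1, 3, 5, 4]))  # -> 1.0
```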
Problem

Research questions and friction points this paper is trying to address.

Keywords: slide generation, evaluation benchmark, quantitative metrics, LLM evaluation, human preference alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Keywords: SlidesGen-Bench, quantitative evaluation, slide generation, human preference alignment, computational metrics