ArtifactsBench: Bridging the Visual-Interactive Gap in LLM Code Generation Evaluation

📅 2025-07-07
📈 Citations: 0 · Influential citations: 0
🤖 AI Summary
Existing benchmarks for LLM-generated visual, interactive code overemphasize algorithmic correctness while neglecting critical user-experience dimensions such as visual fidelity and interaction completeness. To address this gap, we propose the first automated multimodal evaluation benchmark designed specifically for LLM-generated visual, interactive code. Our framework integrates programmatic rendering, temporal screenshot capture, fine-grained per-task checklists, and multimodal large language model (MLLM) adjudication to enable scalable, perception-aligned automatic assessment. This paradigm bridges, for the first time, the long-standing disconnect between traditional algorithmic metrics and human-centered experience evaluation. We evaluate over 30 state-of-the-art LLMs on 1,825 real-world web tasks; our automated scores achieve 94.4% agreement with the human preference ranking from WebDev Arena and exceed 90% agreement with expert pairwise judgments.
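
To make the render-and-capture step concrete, here is a minimal sketch assuming Playwright as the headless-browser driver and a plain HTML artifact; the summary does not name the actual rendering stack, so the library choice, viewport, and snapshot timing are all illustrative.

```python
# Minimal sketch of the render-and-capture step (Playwright is an assumption;
# the paper does not name its rendering stack).
# Setup: pip install playwright && playwright install chromium
from playwright.sync_api import sync_playwright

def capture_temporal_screenshots(artifact_html: str, out_prefix: str,
                                 snapshots: int = 3, interval_ms: int = 1000) -> list[str]:
    """Render a generated HTML artifact and screenshot it at fixed intervals,
    so animations and state changes are visible to a downstream judge."""
    paths = []
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page(viewport={"width": 1280, "height": 800})
        page.set_content(artifact_html)          # load the LLM-generated code
        for i in range(snapshots):
            page.wait_for_timeout(interval_ms)   # let dynamic behavior play out
            path = f"{out_prefix}_{i}.png"
            page.screenshot(path=path)
            paths.append(path)
        browser.close()
    return paths
```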

📝 Abstract
The generative capabilities of Large Language Models (LLMs) are rapidly expanding from static code to dynamic, interactive visual artifacts. This progress is bottlenecked by a critical evaluation gap: established benchmarks focus on algorithmic correctness and are blind to the visual fidelity and interactive integrity that define modern user experiences. To bridge this gap, we introduce ArtifactsBench, a new benchmark and paradigm for the automated, multimodal evaluation of visual code generation. Our framework programmatically renders each generated artifact and captures its dynamic behavior through temporal screenshots. This visual evidence, alongside the source code, is then assessed by a Multimodal LLM (MLLM)-as-Judge, which is rigorously guided by a fine-grained, per-task checklist to ensure holistic and reproducible scoring. We construct a new benchmark of 1,825 diverse tasks and evaluate over 30 leading LLMs. Our automated evaluation achieves a striking 94.4% ranking consistency with WebDev Arena, the gold-standard for human preference in web development, and over 90% pairwise agreement with human experts. This establishes ArtifactsBench as the first framework to reliably automate the assessment of human-perceived quality at scale. Our analysis provides a high-resolution map of the current SOTA, revealing that generalist models often outperform domain-specific ones. We open-source ArtifactsBench, including the benchmark, evaluation harness, and baseline results at https://artifactsbenchmark.github.io/, to provide the community with a scalable and accurate tool to accelerate the development of user-centric generative models.
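
For intuition about the agreement figures above, the following fragment computes one common notion of ranking consistency: the fraction of model pairs that two leaderboards order the same way. The paper's exact metric is not specified here, so `pairwise_ranking_agreement` and the example rankings are purely illustrative.

```python
# Illustrative ranking-consistency metric: the share of model pairs ordered
# the same way by two leaderboards (the benchmark's actual formula may differ).
from itertools import combinations

def pairwise_ranking_agreement(rank_a: dict[str, int], rank_b: dict[str, int]) -> float:
    """Return the fraction of concordant pairs between two rankings (1 = best rank)."""
    models = sorted(rank_a.keys() & rank_b.keys())
    pairs = list(combinations(models, 2))
    concordant = sum((rank_a[m] - rank_a[n]) * (rank_b[m] - rank_b[n]) > 0
                     for m, n in pairs)
    return concordant / len(pairs)

# Hypothetical three-model example: 2 of 3 pairs agree -> 0.667
bench = {"model-x": 1, "model-y": 2, "model-z": 3}
arena = {"model-x": 1, "model-y": 3, "model-z": 2}
print(pairwise_ranking_agreement(bench, arena))
```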
Problem

Research questions and friction points this paper is trying to address.

How can visual fidelity in LLM-generated interactive artifacts be evaluated?
How can multimodal assessment of dynamic code behavior be automated?
How can the gap between algorithmic correctness and user-experience quality be closed?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automated multimodal evaluation using MLLM-as-Judge
Programmatic rendering and temporal screenshot capture
Fine-grained per-task checklist for holistic, reproducible scoring (see the sketch below)
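
As a sketch of how these pieces might fit together, the following assembles a checklist-guided judging request in an OpenAI-style multimodal chat payload. The judge model name, 0-10 scale, and prompt wording are assumptions for illustration, not the benchmark's actual configuration.

```python
import base64

def build_judge_request(task: str, checklist: list[str], source_code: str,
                        screenshot_paths: list[str], model: str = "gpt-4o") -> dict:
    """Assemble a checklist-guided judging request in an OpenAI-style
    multimodal chat format: one text part plus one image part per screenshot."""
    rubric = "\n".join(f"{i}. {item}" for i, item in enumerate(checklist, start=1))
    prompt = (
        f"Task: {task}\n\nSource code:\n{source_code}\n\n"
        f"Score each checklist item from 0 to 10, judging both the code and the "
        f"attached temporal screenshots:\n{rubric}\n"
        'Reply as JSON: {"scores": [...], "rationale": "..."}'
    )
    content = [{"type": "text", "text": prompt}]
    for path in screenshot_paths:                # attach each temporal screenshot
        with open(path, "rb") as fh:
            b64 = base64.b64encode(fh.read()).decode("ascii")
        content.append({"type": "image_url",
                        "image_url": {"url": f"data:image/png;base64,{b64}"}})
    return {"model": model, "messages": [{"role": "user", "content": content}]}
```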
👥 Authors
Chenchen Zhang, Yuhang Li (Yale University), Can Xu, Jiaheng Liu, Ao Liu, Shihui Hu, Dengpeng Wu, Guanhua Huang, Kejiao Li, Qi Yi, Ruibin Xiong (Tencent Hunyuan), Haotian Zhu, Yuanxing Zhang (Kuaishou Technology), Yuhao Jiang (EPFL), Yue Zhang, Zenan Xu (Sun Yat-sen University), Bohui Zhai, Guoxiang He, Hebin Li, Jie Zhao, Le Zhang, Lingyun Tan, Pengyu Guo, Xianshu Pang, Yang Ruan (Indiana University)