MiniAppBench: Evaluating the Shift from Text to Interactive HTML Responses in LLM-Powered Assistants

📅 2026-03-10
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing evaluation benchmarks struggle to assess the capability of large language models (LLMs) to generate dynamic, HTML-based MiniApps with authentic interactive logic. To address this gap, this work introduces MiniAppBench, the first comprehensive benchmark focused on principle-driven, interactive MiniApp generation, along with MiniAppEval, a browser-automation-based agentic evaluation framework that systematically measures performance across three dimensions: intent consistency, static structure, and dynamic behavior. By distilling tasks from a real-world application with over ten million generations and incorporating human-like exploratory testing, the approach overcomes the difficulty of evaluating open-ended scenarios that lack a single ground-truth answer. Experimental results reveal significant shortcomings in current LLMs' ability to produce high-quality MiniApps, while MiniAppEval aligns closely with human judgment, establishing a reliable standard for research on interactive application generation.

📝 Abstract
With the rapid advancement of Large Language Models (LLMs) in code generation, human-AI interaction is evolving from static text responses to dynamic, interactive HTML-based applications, which we term MiniApps. These applications require models to not only render visual interfaces but also construct customized interaction logic that adheres to real-world principles. However, existing benchmarks primarily focus on algorithmic correctness or static layout reconstruction, failing to capture the capabilities required for this new paradigm. To address this gap, we introduce MiniAppBench, the first comprehensive benchmark designed to evaluate principle-driven, interactive application generation. Sourced from a real-world application with 10M+ generations, MiniAppBench distills 500 tasks across six domains (e.g., Games, Science, and Tools). Furthermore, to tackle the challenge of evaluating open-ended interactions where no single ground truth exists, we propose MiniAppEval, an agentic evaluation framework. Leveraging browser automation, it performs human-like exploratory testing to systematically assess applications across three dimensions: Intention, Static, and Dynamic. Our experiments reveal that current LLMs still face significant challenges in generating high-quality MiniApps, while MiniAppEval demonstrates high alignment with human judgment, establishing a reliable standard for future research. Our code is available at github.com/MiniAppBench.
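
To make the browser-automation idea concrete, below is a minimal sketch of what human-like exploratory testing of a generated MiniApp could look like, written with Playwright's Python API. This is an illustration based only on the abstract, not the authors' MiniAppEval implementation: the probing heuristic (clicking visible interactive elements and checking for DOM changes or console errors) and the placeholder judging step are assumptions.

# Minimal sketch of exploratory testing for a generated MiniApp.
# NOT the authors' MiniAppEval implementation; heuristics are assumptions.
from playwright.sync_api import sync_playwright

def explore_miniapp(html: str, max_actions: int = 10) -> dict:
    """Load a generated MiniApp, probe its interactive elements, and
    collect observations (console errors, DOM changes) for later judging."""
    observations = {"console_errors": [], "action_log": [], "static_html": ""}
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        # Capture runtime errors surfaced in the browser console.
        page.on(
            "console",
            lambda msg: observations["console_errors"].append(msg.text)
            if msg.type == "error"
            else None,
        )
        page.set_content(html)
        baseline = page.content()  # static-structure snapshot
        observations["static_html"] = baseline

        # Human-like exploration: click interactive elements and note whether
        # the DOM changes (a crude proxy for dynamic behavior).
        for el in page.query_selector_all("button, a, input, select, [onclick]")[:max_actions]:
            try:
                el.click(timeout=1000)
                observations["action_log"].append(
                    {"changed_dom": page.content() != baseline}
                )
            except Exception as exc:  # detached nodes, navigation, timeouts, ...
                observations["action_log"].append({"error": str(exc)})
        browser.close()
    return observations

# The observations could then be scored along the paper's three dimensions
# (Intention, Static, Dynamic), e.g. by an LLM judge. `judge_with_llm` is a
# hypothetical placeholder, not part of MiniAppEval:
# scores = judge_with_llm(task_prompt, explore_miniapp(generated_html))
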
Problem

Research questions and friction points this paper is trying to address.

Interactive HTML Applications
LLM Evaluation
MiniApps
Principle-driven Interaction
Open-ended Generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

MiniAppBench
Interactive HTML Generation
Agentic Evaluation
Browser Automation
Principle-Driven Interaction
Zuhao Zhang
Inclusion AI, Ant Group
Chengyue Yu
Inclusion AI, Ant Group
Yuante Li
Carnegie Mellon University
AI Scientist, Multi-Agent System, Large Language Models, Data Mining, AI For Finance
Chenyi Zhuang
AIST, AIRC
machine learning
Linjian Mo
Ant Group
Shuai Li
Shanghai Jiao Tong University