AI Summary
Prior work lacks a systematic, controlled evaluation of the interfaces through which large language model (LLM) web agents interact with websites. Method: This study conducts the first cross-interface benchmark across four simulated e-commerce environments, evaluating HTML, RAG, MCP, and NLWeb interfaces on product search, price comparison, and checkout tasks. We propose a multidimensional evaluation framework integrating F1 score, end-to-end latency, and token cost, using state-of-the-art models including GPT-4.1, GPT-5 (and GPT-5 mini), and Claude Sonnet 4. Results: RAG, MCP, and NLWeb significantly outperform raw HTML, achieving up to 0.87 F1, reducing average task latency from 291 seconds to 50–62 seconds, and cutting token consumption from about 241K to 47K–140K per task. RAG with GPT-5 yields the best overall performance. This work establishes the first empirical benchmark and methodological foundation for interface selection in LLM-based web agents.
Abstract
Large language model agents are increasingly used to automate web tasks such as product search, offer comparison, and checkout. Current research explores different interfaces through which these agents interact with websites, including traditional HTML browsing, retrieval-augmented generation (RAG) over pre-crawled content, communication via Web APIs using the Model Context Protocol (MCP), and natural-language querying through the NLWeb interface. However, no prior work has compared these four architectures within a single controlled environment using identical tasks.
To address this gap, we introduce a testbed consisting of four simulated e-shops, each offering its products via HTML, MCP, and NLWeb interfaces. For each interface (HTML, RAG, MCP, and NLWeb) we develop specialized agents that perform the same sets of tasks, ranging from simple product searches and price comparisons to complex queries for complementary or substitute products and checkout processes. We evaluate the agents using GPT-4.1, GPT-5, GPT-5 mini, and Claude Sonnet 4 as underlying LLMs. Our evaluation shows that the RAG, MCP, and NLWeb agents outperform the HTML agent in both effectiveness and efficiency. Averaged over all tasks, F1 rises from 0.67 for HTML to between 0.75 and 0.77 for the other agents. Token usage falls from about 241K for HTML to between 47K and 140K per task, and runtime per task drops from 291 seconds to between 50 and 62 seconds. The best overall configuration is RAG with GPT-5, achieving an F1 score of 0.87 and a completion rate of 0.79. When cost is also taken into account, RAG with GPT-5 mini offers a good compromise between API usage fees and performance. Our experiments show that the choice of interaction interface has a substantial impact on both the effectiveness and efficiency of LLM-based web agents.
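The multidimensional evaluation combining F1, latency, and token cost can be sketched as follows. This is a minimal illustration assuming per-task counts of relevant and returned items; the record fields and aggregation are our own simplification, not the paper's actual harness or data.

```python
from dataclasses import dataclass

@dataclass
class TaskResult:
    """One agent run on one task (illustrative schema)."""
    relevant_found: int   # correct items the agent returned (true positives)
    returned: int         # total items the agent returned
    relevant_total: int   # total correct items in the gold answer
    latency_s: float      # end-to-end wall-clock time for the task
    tokens: int           # total LLM tokens consumed

def f1(r: TaskResult) -> float:
    """Standard F1: harmonic mean of precision and recall."""
    precision = r.relevant_found / r.returned if r.returned else 0.0
    recall = r.relevant_found / r.relevant_total if r.relevant_total else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0

def summarize(results: list[TaskResult]) -> dict[str, float]:
    """Average the three evaluation dimensions over all tasks of one interface."""
    n = len(results)
    return {
        "mean_f1": sum(f1(r) for r in results) / n,
        "mean_latency_s": sum(r.latency_s for r in results) / n,
        "mean_tokens": sum(r.tokens for r in results) / n,
    }
```

Averaging each dimension separately, rather than collapsing them into a single score, keeps the effectiveness/efficiency trade-off visible when comparing interface-model configurations.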