🤖 AI Summary
Current evaluations of large language model (LLM) agents predominantly rely on synthetic environments, which fail to capture the diversity, unpredictability, and stringent efficiency demands of real-world cloud service customer requests. This work proposes the first evaluation framework grounded in authentic cloud service tickets, preserving the multi-turn reasoning chains and tool-call dependencies inherent in actual workflows. It introduces novel customer-centric metrics, such as the normalized efficiency index and multi-turn latency, to systematically assess agent utility across both service quality and response efficiency. Experimental results demonstrate that, despite their strong reasoning capabilities, state-of-the-art models still fall short of meeting the high-efficiency requirements of complex, real-world multi-turn customer service tasks.
📝 Abstract
The increasing agentic capabilities of Large Language Models (LLMs) have enabled their deployment in real-world applications such as cloud services, where customer-assistant interactions exhibit high technical complexity and long-horizon dependencies, making robustness and resolution efficiency critical for customer satisfaction. However, existing benchmarks for LLM-based agents largely rely on synthetic environments that fail to capture the diversity and unpredictability of authentic customer inputs, and they often ignore the resolution efficiency essential for real-world deployment. To bridge this gap, we introduce CirrusBench, a novel evaluation framework grounded in real-world data from authentic cloud service tickets. CirrusBench preserves the intricate multi-turn logical chains and realistic tool dependencies inherent to technical service environments. Moving beyond execution correctness, we define agent success through novel customer-centric metrics, such as the Normalized Efficiency Index and Multi-Turn Latency, which quantify service quality and explicitly measure resolution efficiency. Experiments with our framework reveal that while state-of-the-art models demonstrate strong reasoning capabilities, they frequently struggle with complex, realistic multi-turn tasks and fail to meet the high-efficiency standards required for customer service, highlighting critical directions for the future development of LLM-based agents in practical technical service applications. The CirrusBench evaluation framework is released at: https://github.com/CirrusAI