Can LLMs Help You at Work? A Sandbox for Evaluating LLM Agents in Enterprise Environments

📅 2025-10-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large enterprises face significant challenges in deploying LLM-based agents due to fragmented data silos, stringent access control policies, and complex cross-departmental workflows. Method: We propose EnterpriseBench—the first systematic, enterprise-oriented evaluation framework—comprising 500 realistic tasks spanning software engineering, HR, finance, and other domains. It introduces an organization-aware task generation pipeline grounded in organizational metadata, explicitly modeling multi-source data integration, fine-grained authorization enforcement, and cross-functional workflow orchestration within a scalable, sandboxed evaluation environment. Contribution/Results: Experiments reveal that state-of-the-art LLM agents achieve only 41.8% task completion on EnterpriseBench, exposing critical limitations in handling enterprise-grade complexity. This work is the first to formally characterize core enterprise constraints—including data governance, role-based access control, and process heterogeneity—and establishes a standardized benchmark and methodology to rigorously evaluate and advance LLM agent capabilities in real-world organizational settings.

📝 Abstract
Enterprise systems are crucial for enhancing productivity and decision-making among employees and customers. Integrating LLM-based systems into enterprise systems enables intelligent automation, personalized experiences, and efficient information retrieval, driving operational efficiency and strategic growth. However, developing and evaluating such systems is challenging due to the inherent complexity of enterprise environments, where data is fragmented across multiple sources and governed by sophisticated access controls. We present EnterpriseBench, a comprehensive benchmark that simulates enterprise settings, featuring 500 diverse tasks across software engineering, HR, finance, and administrative domains. Our benchmark uniquely captures key enterprise characteristics including data source fragmentation, access control hierarchies, and cross-functional workflows. Additionally, we provide a novel data generation pipeline that creates internally consistent enterprise tasks from organizational metadata. Experiments with state-of-the-art LLM agents demonstrate that even the most capable models achieve only 41.8% task completion, highlighting significant opportunities for improvement in enterprise-focused AI systems.
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLM agents in complex enterprise environments with fragmented data
Developing benchmarks for enterprise tasks across multiple business domains
Assessing LLM performance on enterprise workflows with access controls
Innovation

Methods, ideas, or system contributions that make the work stand out.

EnterpriseBench benchmark simulates complex enterprise environments
Data generation pipeline creates internally consistent tasks from organizational metadata
Evaluates LLM agents across 500 cross-functional enterprise tasks
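The "fine-grained authorization enforcement" the benchmark models can be pictured as a permission gate the sandbox applies before an agent touches any data source. The sketch below is illustrative only, not code from the paper; the `Agent`, `Document`, and `can_read` names and the department/role-level scheme are hypothetical simplifications of role-based access control.

```python
# Illustrative sketch (not from EnterpriseBench itself): a sandboxed
# enterprise environment checking role-based access before letting an
# LLM agent read a document. All names and fields are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class Document:
    doc_id: str
    department: str  # owning department, e.g. "Finance"
    min_role: int    # minimum role level required to read

@dataclass(frozen=True)
class Agent:
    name: str
    department: str
    role_level: int  # e.g. 1 = staff, 2 = manager, 3 = executive

def can_read(agent: Agent, doc: Document) -> bool:
    """Grant read access only within the agent's own department
    and only if its role level meets the document's threshold."""
    return agent.department == doc.department and agent.role_level >= doc.min_role

payroll = Document("doc-001", department="Finance", min_role=2)
hr_bot = Agent("hr-assistant", department="HR", role_level=3)
fin_bot = Agent("finance-agent", department="Finance", role_level=2)

print(can_read(hr_bot, payroll))   # False: wrong department despite high role
print(can_read(fin_bot, payroll))  # True: same department, sufficient role
```

A gate like this is what makes enterprise tasks harder than open-web agent tasks: the agent must reason about which sources it is permitted to query, not just which sources contain the answer.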