🤖 AI Summary
This work addresses the challenge of evaluating commercial analytics agents on multi-step insight generation. We introduce InsightBench, an end-to-end benchmark comprising 100 real-world business datasets with human-curated ground-truth insights, requiring agents to autonomously execute the full pipeline: question formulation, data analysis, and derivation of actionable insights. We propose a two-way evaluation mechanism built on the open-source LLaMA-3, together with a quality-assurance process that enforces goal clarity and analytical depth for every dataset. Experiments show that our baseline agent, AgentPoirot, which integrates Pandas, SQL, and natural-language reasoning, outperforms single-query baselines such as Pandas Agent, and they validate the feasibility of open-weight large language models for complex commercial analytics tasks. All datasets, code, and evaluation tools are publicly released.
📝 Abstract
Data analytics is essential for extracting valuable insights from data that can assist organizations in making effective decisions. We introduce InsightBench, a benchmark dataset with three key features. First, it consists of 100 datasets representing diverse business use cases such as finance and incident management, each accompanied by a carefully curated set of insights planted in the data. Second, unlike existing benchmarks, which focus on answering single queries, InsightBench evaluates agents on their ability to perform end-to-end data analytics, including formulating questions, interpreting answers, and generating a summary of insights and actionable steps. Third, we conducted comprehensive quality assurance to ensure that each dataset in the benchmark has clear goals and includes relevant and meaningful questions and analysis. Furthermore, we implement a two-way evaluation mechanism using LLaMA-3 as an effective, open-source evaluator to assess agents' ability to extract insights. We also propose AgentPoirot, our baseline data analysis agent capable of performing end-to-end data analytics. Our evaluation on InsightBench shows that AgentPoirot outperforms existing approaches (such as Pandas Agent) that focus on resolving single queries. We also compare the performance of open- and closed-source LLMs and various evaluation strategies. Overall, this benchmark serves as a testbed to motivate further development in comprehensive automated data analytics and can be accessed here: https://github.com/ServiceNow/insight-bench.
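To make the "two-way" idea concrete, here is a minimal sketch of such a symmetric scoring scheme. It is an assumption about the general shape of the mechanism, not the paper's implementation: the `overlap_score` placeholder stands in for the LLaMA-3 judge, and `two_way_score` is a hypothetical helper that matches each generated insight to its best ground-truth counterpart and vice versa, averaging the two directions (a precision/recall-style pairing).

```python
from typing import Callable, List

def overlap_score(a: str, b: str) -> float:
    """Placeholder similarity judge: Jaccard overlap of word sets.
    In the benchmark, this role is played by an LLM evaluator."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def two_way_score(
    predicted: List[str],
    ground_truth: List[str],
    judge: Callable[[str, str], float] = overlap_score,
) -> float:
    """Score predicted insights against ground truth in both directions.

    Forward: does each predicted insight match some true insight?
    Backward: is each true insight covered by some prediction?
    """
    def best_match_avg(src: List[str], tgt: List[str]) -> float:
        return sum(max(judge(s, t) for t in tgt) for s in src) / len(src)

    forward = best_match_avg(predicted, ground_truth)
    backward = best_match_avg(ground_truth, predicted)
    return (forward + backward) / 2
```

Averaging both directions penalizes agents that produce many loosely related insights (low forward score) as well as agents that miss planted insights entirely (low backward score).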