🤖 AI Summary
Existing data analysis methods rely on manual technique selection, hindering automation and personalized insight generation. This paper introduces the first LLM-driven analytical agent capable of autonomously interpreting user intent, matching appropriate domain-specific skills (e.g., clustering, predictive modeling, BERT-based NLP), and executing end-to-end analysis, from raw data to customized insights. Key innovations include a skill self-adaptation mechanism and an LLM-as-a-judge automated evaluation paradigm, which together extend beyond native LLM capabilities to enable dynamic skill expansion. The agent integrates hybrid RAG-based skill retrieval, goal-aware question generation, and documentation-driven code synthesis. In human evaluations on the KaggleBench benchmark, its analyses are preferred 48.78% of the time versus 27.67% for the unskilled baseline, demonstrating substantial improvements in insight depth and domain adaptability on complex analytical tasks.
📝 Abstract
We introduce AgentAda, the first LLM-powered analytics agent that can learn and use new analytics skills to extract more specialized insights. Unlike existing methods that require users to manually decide which data analytics method to apply, AgentAda automatically identifies the skill needed from a library of analytical skills to perform the analysis. This also allows AgentAda to use skills that existing LLMs cannot perform out of the box. The library covers a range of methods, including clustering, predictive modeling, and NLP techniques such as BERT, which allow AgentAda to handle complex analytics tasks based on what the user needs. AgentAda's dataset-to-insight extraction strategy consists of three key steps: (I) a question generator to generate queries relevant to the user's goal and persona, (II) a hybrid Retrieval-Augmented Generation (RAG)-based skill matcher to choose the best data analytics skill from the skill library, and (III) a code generator that produces executable code based on the retrieved skill's documentation to extract key patterns. We also introduce KaggleBench, a benchmark of curated notebooks across diverse domains, to evaluate AgentAda's performance. We conducted a human evaluation demonstrating that AgentAda provides more insightful analytics than existing tools, with 48.78% of evaluators preferring its analyses, compared to 27.67% for the unskilled agent. We also propose a novel LLM-as-a-judge approach, which we show aligns with human evaluation, as a way to automate insight-quality evaluation at a larger scale.
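To make step (II) concrete, the hybrid skill matcher can be pictured as blending a sparse lexical signal with a dense semantic signal over the skill library's documentation. The sketch below is a minimal, self-contained illustration under stated assumptions: the skill names and descriptions are invented, Jaccard overlap stands in for a real sparse retriever (e.g., BM25), and a bag-of-words cosine stands in for embedding similarity; the paper's actual retriever and skill library are not reproduced here.

```python
from collections import Counter
from math import sqrt

# Hypothetical miniature skill library; real entries would carry the full
# documentation that the downstream code generator conditions on.
SKILLS = [
    {"name": "kmeans_clustering",
     "doc": "group unlabeled rows into clusters by feature similarity"},
    {"name": "regression_forecast",
     "doc": "predict a numeric target from historical tabular features"},
    {"name": "bert_text_topics",
     "doc": "extract topics and sentiment from free text with a BERT encoder"},
]

def tokens(text):
    return text.lower().split()

def lexical_score(query, doc):
    """Sparse signal: Jaccard overlap of token sets (stand-in for BM25)."""
    q, d = set(tokens(query)), set(tokens(doc))
    return len(q & d) / len(q | d) if q | d else 0.0

def dense_score(query, doc):
    """Dense signal: bag-of-words cosine (stand-in for embedding similarity)."""
    q, d = Counter(tokens(query)), Counter(tokens(doc))
    dot = sum(q[t] * d[t] for t in q)
    norm = sqrt(sum(v * v for v in q.values())) * sqrt(sum(v * v for v in d.values()))
    return dot / norm if norm else 0.0

def match_skill(query, skills=SKILLS, alpha=0.5):
    """Hybrid retrieval: blend the two signals, return the best-scoring skill."""
    scored = [(alpha * lexical_score(query, s["doc"])
               + (1 - alpha) * dense_score(query, s["doc"]), s["name"])
              for s in skills]
    return max(scored)[1]
```

A generated analytical question such as "group customer rows into clusters by similarity" would route to the clustering skill, whose documentation then drives code generation in step (III). The blending weight `alpha` and the two scoring functions are illustrative placeholders for whatever sparse/dense combination the hybrid RAG matcher actually uses.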