Measuring Data Science Automation: A Survey of Evaluation Tools for AI Assistants and Agents

📅 2025-06-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper identifies three structural deficiencies in current LLM evaluation for data science: (1) imbalanced task coverage that neglects data management and exploratory analysis; (2) oversimplified human–AI collaboration models that lack intermediate autonomy levels; and (3) a narrow automation paradigm that prioritizes human replacement over task-transformation-driven capability advancement. To address these, the authors propose a "task-transformation-driven automation" paradigm and introduce a three-dimensional evaluation framework covering goal-directedness, collaboration intensity, and capability leap. Through a systematic literature review and cross-platform analysis of 72 mainstream benchmarks, they find that only 11% support data cleaning and exploration, and that none quantify dynamic collaboration intensity. The analysis positions medium-autonomy collaboration as a critical evolutionary pathway and advocates more comprehensive, human-centered, and evolvable AI evaluation standards.
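A minimal sketch, assuming a simple annotation schema, of how the framework's three dimensions could be recorded per benchmark and then queried for coverage gaps of the kind the survey reports. The field names, level labels, and the two example entries are hypothetical illustrations, not taken from the paper.

```python
from dataclasses import dataclass

# Hypothetical annotation schema: the dimension names follow the framework
# summarised above, but the enumerated levels and the example entries below
# are illustrative only.
@dataclass
class BenchmarkEntry:
    name: str
    tasks: set[str]               # e.g. {"modelling", "data_cleaning", "eda"}
    goal_directedness: str        # "open_ended" ... "fully_specified"
    collaboration_intensity: str  # "assistant", "mixed_initiative", "autonomous"
    capability_leap: bool         # tests task transformation, not just substitution

catalogue = [
    BenchmarkEntry("bench_a", {"modelling"}, "fully_specified", "autonomous", False),
    BenchmarkEntry("bench_b", {"data_cleaning", "eda"}, "open_ended", "assistant", False),
]

# Coverage query in the spirit of the survey's analysis: what fraction of
# catalogued benchmarks touches data cleaning or exploratory analysis at all?
covered = [b for b in catalogue if b.tasks & {"data_cleaning", "eda"}]
print(f"{len(covered) / len(catalogue):.0%} cover cleaning/exploration")
```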

📝 Abstract
Data science aims to extract insights from data to support decision-making processes. Recently, Large Language Models (LLMs) have increasingly been used as assistants for data science, suggesting ideas, techniques, and small code snippets, or interpreting and reporting results. Proper automation of some data-science activities is now promised by the rise of LLM agents, i.e., AI systems powered by an LLM equipped with additional affordances, such as code execution and knowledge bases, that can perform self-directed actions and interact with digital environments. In this paper, we survey the evaluation of LLM assistants and agents for data science. We find (1) a dominant focus on a small subset of goal-oriented activities, largely ignoring data management and exploratory activities; (2) a concentration on pure assistance or fully autonomous agents, without considering intermediate levels of human-AI collaboration; and (3) an emphasis on human substitution, therefore neglecting the possibility of higher levels of automation thanks to task transformation.
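To make the assistant/agent distinction from the abstract concrete, here is a minimal sketch assuming a generic `query_llm` chat call and a `run_code` execution sandbox (both placeholders, not APIs from the paper): pure assistance returns a single suggestion for the human to act on, while the agent loop acts on its own outputs through the code-execution affordance.

```python
# Illustrative sketch only: `query_llm` and `run_code` are placeholder callables,
# and the loop structure and stop condition are assumptions, not the paper's design.

def assistant_suggestion(task: str, query_llm) -> str:
    """Pure assistance: one suggestion; the human executes it and decides what is next."""
    return query_llm(f"Suggest an approach or a small code snippet for: {task}")

def agent_run(task: str, query_llm, run_code, max_steps: int = 5) -> str:
    """Agentic automation: the model iterates, executing code and reading the results."""
    context = f"Task: {task}"
    for _ in range(max_steps):
        action = query_llm(context + "\nReply with Python code, or DONE: <answer>.")
        if action.startswith("DONE:"):
            return action.removeprefix("DONE:").strip()
        result = run_code(action)  # affordance: code execution in a digital environment
        context += f"\nCode:\n{action}\nResult:\n{result}"
    return "step budget exhausted"
```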
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLM assistants and agents in data science tasks
Assessing automation levels in human-AI collaboration for data science
Identifying gaps in data management and exploratory activity evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM agents with code execution and knowledge bases
Survey of evaluation tools for AI assistants
Focus on human-AI collaboration levels
Irene Testini
Leverhulme Centre for the Future of Intelligence, University of Cambridge, UK
José Hernández-Orallo
Leverhulme Centre for the Future of Intelligence, University of Cambridge, UK; Valencian Research Institute for Artificial Intelligence (VRAIN), Universitat Politècnica de València, Spain
Lorenzo Pacchiardi
Research Associate, University of Cambridge
Large Language Models · AI evaluation · AI policy · Bayesian Inference · Likelihood-Free Inference