🤖 AI Summary
Existing automated data science benchmarks overlook core data governance challenges—ensuring data quality and regulatory compliance. This paper introduces GovBench, the first evaluation benchmark designed for realistic data governance workflows, comprising 150 high-quality tasks with controllable noise levels. We further propose DataGovAgent, a novel framework integrating constraint-driven planning, retrieval-augmented generation (RAG), sandbox-based feedback-driven debugging, and multi-stage task decomposition to optimize end-to-end reliability. Key innovations include a reversed-objective noise synthesis method and fine-grained reliability metrics. Experiments demonstrate that DataGovAgent improves the Average Task Score on complex governance tasks from 39.7 to 54.9 and reduces debugging iterations by 77.9%, significantly outperforming general-purpose baselines. Our work establishes a new paradigm for trustworthy LLM deployment in data governance.
📝 Abstract
Data governance ensures data quality, security, and compliance through policies and standards, a critical foundation for scaling modern AI development. Recently, large language models (LLMs) have emerged as a promising solution for automating data governance by translating user intent into executable transformation code. However, existing benchmarks for automated data science often emphasize snippet-level coding or high-level analytics, failing to capture the unique challenge of data governance: ensuring the correctness and quality of the data itself. To bridge this gap, we introduce GovBench, a benchmark featuring 150 diverse tasks grounded in real-world scenarios and built on data from actual cases. GovBench employs a novel "reversed-objective" methodology to synthesize realistic noise and uses rigorous metrics to assess end-to-end pipeline reliability. Our analysis of GovBench reveals that current models struggle with complex, multi-step workflows and lack robust error-correction mechanisms. Consequently, we propose DataGovAgent, a framework with a Planner-Executor-Evaluator architecture that integrates constraint-based planning, retrieval-augmented generation, and sandboxed feedback-driven debugging. Experimental results show that DataGovAgent significantly boosts the Average Task Score (ATS) on complex tasks from 39.7 to 54.9 and reduces debugging iterations by 77.9% compared to general-purpose baselines.
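The "reversed-objective" noise synthesis described above can be pictured as starting from clean, already-governed data and deliberately applying corruptions that invert a governance objective (e.g. completeness), so the ground truth needed for scoring is known by construction. The sketch below is an illustrative assumption, not the paper's actual implementation; the function name, record schema, and corruption choice are all hypothetical.

```python
import random

def inject_noise(records, noise_rate=0.3, seed=0):
    """Reversed-objective noise synthesis (illustrative sketch).

    Start from clean records whose governed state is known, then corrupt
    a random field in some records -- here by nulling it, violating a
    'completeness' objective -- while recording the clean value as
    ground truth so a governance pipeline's repairs can be scored.
    """
    rng = random.Random(seed)
    corrupted, ground_truth = [], []
    for i, rec in enumerate(records):
        rec = dict(rec)  # copy so the clean input is left intact
        if rng.random() < noise_rate:
            field = rng.choice(list(rec))
            ground_truth.append((i, field, rec[field]))  # clean value to recover
            rec[field] = None  # inject a missing-value violation
        corrupted.append(rec)
    return corrupted, ground_truth

clean = [{"id": n, "score": n * 10} for n in range(5)]
noisy, truth = inject_noise(clean, noise_rate=0.5, seed=1)
```

Because every injected violation is paired with its clean value, an evaluator can compute exact repair metrics rather than relying on heuristic data-quality checks.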