In-Memory Indexing and Querying of Provenance in Data Preparation Pipelines

📅 2025-11-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenges of fine-grained provenance capture and efficient querying in data preparation workflows, this paper proposes a tensor-based in-memory indexing mechanism. The method integrates both backward-looking (retrospective) and forward-looking (prospective) provenance, employing an augmented tensor model to explicitly encode record-level and attribute-level input–output mappings, thereby enabling expressive lineage analysis. Its memory-efficient index design substantially reduces storage overhead while accelerating diverse provenance queries, including origin tracing, impact analysis, and dependency exploration. Experimental evaluation on real-world and synthetic datasets shows that the approach consistently outperforms state-of-the-art baselines in both query latency and memory footprint. The proposed solution thus provides scalable, high-performance provenance support for downstream tasks such as debugging, model interpretability, fairness auditing, and data quality diagnostics.

📝 Abstract
Data provenance has numerous applications in the context of data preparation pipelines. It can be used for debugging faulty pipelines, interpreting results, verifying fairness, and identifying data quality issues, which may affect the sources feeding the pipeline execution. In this paper, we present an indexing mechanism to efficiently capture and query pipeline provenance. Our solution leverages tensors to capture fine-grained provenance of data processing operations, using minimal memory. In addition to record-level lineage relationships, we provide finer granularity at the attribute level. This is achieved by augmenting tensors, which capture retrospective provenance, with prospective provenance information, drawing connections between input and output schemas of data processing operations. We demonstrate how these two types of provenance (retrospective and prospective) can be combined to answer a broad range of provenance queries efficiently, and show effectiveness through evaluation exercises using both real and synthetic data.
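The abstract describes combining a retrospective tensor (record-level input–output mappings) with a prospective schema mapping (attribute-level input–output dependencies) to answer lineage queries. The following is a minimal sketch of that idea; the class name, methods, and `schema_map` structure are illustrative assumptions, not the paper's actual API, and a dense boolean matrix stands in for whatever memory-efficient tensor encoding the paper uses.

```python
# Illustrative sketch of a record/attribute-level provenance index.
# Names (ProvenanceIndex, schema_map, backward, forward, cell_backward)
# are assumptions for exposition, not the paper's implementation.
import numpy as np

class ProvenanceIndex:
    def __init__(self, n_in, n_out, schema_map):
        # Retrospective provenance: T[i, j] is True when output record j
        # was derived from input record i by this pipeline operation.
        self.T = np.zeros((n_in, n_out), dtype=bool)
        # Prospective provenance: maps each output attribute to the set
        # of input attributes it was computed from.
        self.schema_map = schema_map

    def record(self, i, j):
        """Capture that output record j derives from input record i."""
        self.T[i, j] = True

    def backward(self, j):
        """Origin tracing: input records that contributed to output j."""
        return np.flatnonzero(self.T[:, j])

    def forward(self, i):
        """Impact analysis: output records derived from input i."""
        return np.flatnonzero(self.T[i, :])

    def cell_backward(self, j, attr):
        """Combine both kinds of provenance: the input (record, attribute)
        pairs behind a single output cell (j, attr)."""
        rows = self.backward(j)
        attrs = self.schema_map.get(attr, set())
        return [(r, a) for r in rows for a in sorted(attrs)]

# Usage: a step reads 3 input records, emits 2 output records, and
# derives the output attribute "total" from inputs "price" and "qty".
idx = ProvenanceIndex(3, 2, {"total": {"price", "qty"}})
idx.record(0, 0)   # output record 0 derives from input record 0
idx.record(2, 1)   # output record 1 derives from input record 2
```

Here `backward`/`forward` answer record-level lineage queries from the same tensor, while `cell_backward` shows how the prospective schema mapping refines them to attribute granularity, as the abstract describes.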
Problem

Research questions and friction points this paper is trying to address.

Efficiently capturing and querying data pipeline provenance
Combining retrospective and prospective provenance for queries
Minimizing memory usage while tracking fine-grained lineage
Innovation

Methods, ideas, or system contributions that make the work stand out.

In-memory indexing for efficient provenance capture
Tensor-based fine-grained provenance with minimal memory
Combining retrospective and prospective provenance for queries
Khalid Belhajjame
PSL, Université Paris-Dauphine, LAMSADE
Data preparation · Provenance · Scientific workflows · Knowledge graphs
Haroun Mezrioui
Univ. Paris-Dauphine – Tunis, Tunis, Tunisia
Yuyan Zhao
LAMSADE, Univ. Paris-Dauphine, PSL, Paris, France