Chipmink: Efficient Delta Identification for Massive Object Graphs

📅 2025-11-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Problem: Existing full-snapshot persistence mechanisms (e.g., Pickle, Dill) are inefficient for the large, evolving object graphs of data science workloads such as computational notebooks and batch scripts: they redundantly store unchanged objects and cannot track dirty objects across heterogeneous memory tiers (CPU heaps, GPUs, shared memory, remote nodes). Method: This paper introduces Chipmink, a graph-aware object storage system. Its core innovation is dynamically partitioning the object graph into persistence units ("pods") via a reference-aware cost model, coupled with multi-tier storage interfaces that emulate DBMS buffer management to enable fine-grained dirty-object identification and incremental persistence. Contribution/Results: On realistic data science workloads, Chipmink reduces storage size by up to 36.5× and speeds up persistence by up to 12.4× over the best baseline, while remaining general and scalable across diverse memory architectures.

📝 Abstract
Ranging from batch scripts to computational notebooks, modern data science tools rely on massive and evolving object graphs that represent structured data, models, plots, and more. Persisting these objects is critical, not only to enhance system robustness against unexpected failures but also to support continuous, non-linear data exploration via versioning. Existing object persistence mechanisms (e.g., Pickle, Dill) rely on complete snapshotting, often redundantly storing unchanged objects during execution and exploration, resulting in significant inefficiency in both time and storage. Unlike DBMSs, data science systems lack centralized buffer managers that track dirty objects. Worse, object states span various locations such as memory heaps, shared memory, GPUs, and remote machines, making dirty object identification fundamentally more challenging. In this work, we propose a graph-based object store, named Chipmink, that acts like the centralized buffer manager. Unlike static pages in DBMSs, persistence units in Chipmink are dynamically induced by partitioning objects into appropriate subgroups (called pods), minimizing expected persistence costs based on object sizes and reference structure. These pods effectively isolate dirty objects, enabling efficient partial persistence. Our experiments show that Chipmink is general, supporting libraries that rely on shared memory, GPUs, and remote objects. Moreover, Chipmink achieves up to 36.5x smaller storage sizes and 12.4x faster persistence than the best baselines in real-world notebooks and scripts.
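The pod mechanism the abstract describes can be sketched in a few lines: group objects into units, detect which units are dirty, and rewrite only those. The sketch below is a toy illustration with assumed names (`PodStore`, `persist`); it stands in for Chipmink's dirty-object identification with a naive serialize-and-hash check, whereas the real system uses a reference-aware cost model to form pods and does not need to re-serialize everything to find dirty objects.

```python
import hashlib
import pickle


class PodStore:
    """Toy pod-based incremental persistence (illustrative only)."""

    def __init__(self):
        self._digests = {}  # pod_id -> content hash of last persisted state
        self._blobs = {}    # pod_id -> pickled bytes of last persisted state

    def persist(self, pods):
        """Persist only pods whose serialized form changed since last call."""
        written = []
        for pod_id, objects in pods.items():
            blob = pickle.dumps(objects)
            digest = hashlib.sha256(blob).hexdigest()
            if self._digests.get(pod_id) != digest:  # dirty pod
                self._blobs[pod_id] = blob
                self._digests[pod_id] = digest
                written.append(pod_id)
        return written


store = PodStore()
pods = {"data": [1, 2, 3], "model": {"w": 0.5}}
print(store.persist(pods))  # first call: both pods dirty -> ['data', 'model']
pods["model"]["w"] = 0.7
print(store.persist(pods))  # only the model pod changed -> ['model']
```

The key property, mirroring the abstract, is that an update to one small object rewrites only its pod, not the whole workspace; the quality of the grouping (which Chipmink optimizes from object sizes and reference structure) determines how little is rewritten.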
Problem

Research questions and friction points this paper is trying to address.

Inefficient object persistence in data science systems using complete snapshotting
Lack of centralized buffer managers for tracking dirty objects across memory locations (heaps, shared memory, GPUs, remote machines)
Redundant storage of unchanged objects causing time and space inefficiency
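The redundancy problem in these bullets is easy to reproduce with plain pickle: every checkpoint re-serializes the entire workspace, even when only a tiny object changed. A minimal illustration (not from the paper):

```python
import pickle

workspace = {
    "big_table": list(range(100_000)),  # large object that never changes
    "counter": 0,                       # small object that changes every step
}

sizes = []
for step in range(3):
    workspace["counter"] = step
    snapshot = pickle.dumps(workspace)  # full snapshot, as Pickle/Dill do
    sizes.append(len(snapshot))

# Every snapshot is roughly the same large size: the unchanged
# 100k-element list is re-serialized (and would be re-stored) each time.
print(sizes)
```

Three checkpoints cost three full copies of the large list, which is exactly the time and storage waste that partial, dirty-only persistence avoids.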
Innovation

Methods, ideas, or system contributions that make the work stand out.

Graph-based object store with dynamic pod partitioning
Isolates dirty objects for efficient partial persistence
Generalizes to libraries backed by shared memory, GPUs, and remote objects