🤖 AI Summary
The Pandas API lacks a dedicated benchmark, hindering systematic evaluation of performance and API coverage across its many accelerated alternatives. Method: We introduce PandasBench, the first specialized benchmark suite for the Pandas API, built from 102 real-world Jupyter notebooks (3,721 code cells) and featuring four novel mechanisms: real-code-driven API coverage analysis, noise-aware code cleaning, multi-backend compatibility adaptation, and fine-grained, non-uniform input scaling, a first in Pandas benchmarking. Contribution/Results: Using PandasBench, we conduct the largest empirical evaluation to date of four acceleration libraries: Modin, Dask, Koalas, and Dias. Only Modin achieves speedups, and on just 8% of notebooks; Dias attains the highest speedup rate (54% of notebooks) but rewrites some code incorrectly; and the systems differ widely in both API support and correctness. PandasBench establishes standardized infrastructure for rigorous performance assessment and optimization in the Pandas ecosystem.
📝 Abstract
The Pandas API has been central to the success of pandas and its alternatives. Despite its importance, there is no benchmark for it, and we argue that we cannot repurpose existing benchmarks (from other domains) for the Pandas API. In this paper, we introduce requirements that are necessary for a Pandas API benchmark, and present the first benchmark that fulfills them: PandasBench. We argue that such a benchmark should evaluate the real-world coverage of a technique. Yet, real-world coverage is not sufficient for a useful benchmark, so we also cleaned the real-world code of irrelevant parts, adapted it for benchmark usage, and introduced input scaling. We claim that the uniform scaling used in other benchmarks (e.g., TPC-H) is too coarse-grained for PandasBench, and we instead use a non-uniform scaling scheme. PandasBench is the largest Pandas API benchmark to date, with 102 notebooks and 3,721 cells. We used PandasBench to evaluate Modin, Dask, Koalas, and Dias; this is the largest-scale evaluation of all these techniques to date. Prior works report significant speedups using constrained benchmarks, but we show that on a larger benchmark with real-world code, at most 8/102 notebooks (~8%) saw a speedup with Modin, and none with Koalas or Dask. Dias showed speedups in up to 55 notebooks (~54%), but it rewrites code incorrectly in certain cases, which had not been observed in prior work. We also identified many failures: Modin runs only 72/102 (~70%) notebooks, Dask 4 (~4%), Koalas 10 (~10%), and Dias 97 (~95%).
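The contrast between uniform and non-uniform scaling can be illustrated with a minimal sketch. This is not PandasBench's actual implementation: the `scale_input` helper and the per-notebook factors below are hypothetical, chosen only to show the idea that each notebook's input data grows by its own factor rather than by one global factor (as in TPC-H-style uniform scaling).

```python
import numpy as np
import pandas as pd


def scale_input(df: pd.DataFrame, factor: float) -> pd.DataFrame:
    """Grow (or shrink) a DataFrame to ~factor times its row count
    by replicating rows and truncating to the target size.
    Hypothetical helper for illustration only."""
    target = max(1, int(len(df) * factor))
    reps = -(-target // len(df))  # ceiling division
    return pd.concat([df] * reps, ignore_index=True).head(target)


# Two toy notebook inputs of very different sizes.
notebook_inputs = {
    "nb_fast": pd.DataFrame({"x": np.arange(10)}),
    "nb_slow": pd.DataFrame({"x": np.arange(1000)}),
}

# Uniform scaling would apply one factor (e.g., 10x) to every input.
# Non-uniform scaling assigns a per-notebook factor, so a notebook with
# tiny inputs can be scaled up aggressively while an already-large one
# is scaled gently. The factors here are made up for the sketch.
per_input_factor = {"nb_fast": 100.0, "nb_slow": 2.0}

scaled = {
    name: scale_input(df, per_input_factor[name])
    for name, df in notebook_inputs.items()
}
```

Under this sketch, both scaled inputs end up at comparable sizes (1,000 and 2,000 rows), which is the kind of fine-grained control a single global scale factor cannot provide.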