PBench: Workload Synthesizer with Real Statistics for Cloud Analytics Benchmarking

📅 2025-06-19
🤖 AI Summary
Cloud analytics system evaluation commonly relies on static benchmarks (e.g., TPC-H/TPC-DS), which fail to capture key statistical characteristics of real production workloads—such as performance metric distributions, operator frequencies, and temporal query patterns. Meanwhile, existing real-world execution traces lack reproducible SQL queries and database metadata. Method: We formulate the novel problem of *statistically grounded synthetic workload generation*, introducing three core techniques: (1) multi-objective optimization–driven component selection, (2) progressive timestamp modeling, and (3) LLM-enhanced statistical fidelity augmentation—all operating via recombination of benchmark queries and database objects. Contribution/Results: Evaluated on real cloud traces, our approach reduces statistical approximation error by up to 6× over state-of-the-art methods, significantly improving evaluation authenticity, reproducibility, and ecosystem compatibility.

📝 Abstract
Cloud service providers commonly use standard benchmarks like TPC-H and TPC-DS to evaluate and optimize cloud data analytics systems. However, these benchmarks rely on fixed query patterns and fail to capture the real execution statistics of production cloud workloads. Although some cloud database vendors have recently released real workload traces, these traces alone do not qualify as benchmarks, as they typically lack essential components like the original SQL queries and their underlying databases. To overcome this limitation, this paper introduces a new problem of workload synthesis with real statistics, which aims to generate synthetic workloads that closely approximate real execution statistics, including key performance metrics and operator distributions, in real cloud workloads. To address this problem, we propose PBench, a novel workload synthesizer that constructs synthetic workloads by judiciously selecting and combining workload components (i.e., queries and databases) from existing benchmarks. This paper studies the key challenges in PBench. First, we address the challenge of balancing performance metrics and operator distributions by introducing a multi-objective optimization-based component selection method. Second, to capture the temporal dynamics of real workloads, we design a timestamp assignment method that progressively refines workload timestamps. Third, to handle the disparity between the original workload and the candidate workload, we propose a component augmentation approach that leverages large language models (LLMs) to generate additional workload components while maintaining statistical fidelity. We evaluate PBench on real cloud workload traces, demonstrating that it reduces approximation error by up to 6x compared to state-of-the-art methods.
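The abstract's first technique frames component selection as a multi-objective problem: pick benchmark queries whose combined statistics best approximate the real trace's performance metrics and operator distributions. The paper does not give its algorithm here, so the following is only a minimal greedy sketch under assumed inputs; the query dictionaries, `profile_error` weighting, and the single-metric (CPU) target are all illustrative, not PBench's actual method or API.

```python
# Greedy multi-objective component selection (illustrative sketch).
from collections import Counter

def profile_error(selected, target_ops, target_cpu, w_ops=0.5, w_cpu=0.5):
    """Weighted error between the selected queries' statistics and the target."""
    ops = Counter()
    cpu = 0.0
    for q in selected:
        ops.update(q["operators"])
        cpu += q["cpu_seconds"]
    total = sum(ops.values()) or 1
    # L1 distance between normalized operator-frequency distributions.
    op_err = sum(abs(ops[o] / total - target_ops.get(o, 0.0))
                 for o in set(ops) | set(target_ops))
    # Relative error on the aggregate performance metric.
    cpu_err = abs(cpu - target_cpu) / max(target_cpu, 1e-9)
    return w_ops * op_err + w_cpu * cpu_err

def select_components(candidates, target_ops, target_cpu, k):
    """Greedily add the candidate query that most reduces the combined error."""
    selected = []
    for _ in range(k):
        best = min(candidates,
                   key=lambda q: profile_error(selected + [q], target_ops, target_cpu))
        selected.append(best)
    return selected

queries = [
    {"operators": ["Scan", "Join"], "cpu_seconds": 2.0},
    {"operators": ["Scan", "Agg"], "cpu_seconds": 1.0},
    {"operators": ["Scan"], "cpu_seconds": 0.5},
]
target = {"Scan": 0.6, "Join": 0.2, "Agg": 0.2}
workload = select_components(queries, target, target_cpu=3.0, k=2)
```

A greedy heuristic like this trades optimality for speed; the paper's multi-objective optimization presumably handles the metric/operator trade-off more carefully than a fixed 0.5/0.5 weighting.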
Problem

Research questions and friction points this paper is trying to address.

Synthesizing workloads with real cloud execution statistics
Balancing performance metrics and operator distributions accurately
Capturing temporal dynamics in synthetic workload generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-objective optimization for component selection
Timestamp assignment for temporal dynamics
LLM-based component augmentation for fidelity
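The second innovation, progressive timestamp refinement, can be pictured as a coarse-to-fine pass: first match a target arrival histogram at bucket granularity, then refine placements within each bucket. This sketch is an assumption-laden illustration of that idea; the bucket width, rounding repair, and uniform refinement rule are hypothetical, not the paper's algorithm.

```python
# Two-pass timestamp assignment (illustrative sketch).
import random

def assign_timestamps(n_queries, target_hist, bucket_seconds=3600, seed=0):
    """target_hist[i] is the fraction of queries arriving in time bucket i."""
    rng = random.Random(seed)
    # Coarse pass: allocate per-bucket query counts proportional to the target.
    counts = [round(n_queries * frac) for frac in target_hist]
    # Repair rounding drift so the counts sum to n_queries.
    while sum(counts) > n_queries:
        counts[counts.index(max(counts))] -= 1
    while sum(counts) < n_queries:
        counts[counts.index(min(counts))] += 1
    # Refinement pass: spread each bucket's queries across its interval.
    timestamps = []
    for i, c in enumerate(counts):
        start = i * bucket_seconds
        timestamps.extend(sorted(start + rng.uniform(0, bucket_seconds)
                                 for _ in range(c)))
    return timestamps

ts = assign_timestamps(100, [0.1, 0.4, 0.3, 0.2])
```

Separating the coarse allocation from the in-bucket refinement lets the histogram shape be matched exactly even when the fine-grained arrival process is only approximated.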