Learning from the Past: Adaptive Parallelism Tuning for Stream Processing Systems

📅 2025-04-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Dynamic parallelism tuning of operators in distributed stream processing remains challenging, and existing approaches neglect the joint modeling of execution history and DAG topology. Method: This paper proposes StreamTune, a framework addressing these limitations. Its core innovations are: (1) a pre-training and fine-tuning paradigm that clusters historical job DAGs by graph edit distance, enabling cross-job transfer of historical knowledge; and (2) an operator-level bottleneck prediction model with a monotonicity constraint, which maps global historical experience to job-specific parallelism decisions. Results: Evaluated on Apache Flink and Timely Dataflow, StreamTune reduces reconfigurations by up to 29.6%, decreases parallelism degrees by up to 30.8% and 83.3% respectively, and maintains end-to-end processing performance without degradation.

📝 Abstract
Distributed stream processing systems rely on the dataflow model to define and execute streaming jobs, organizing computations as Directed Acyclic Graphs (DAGs) of operators. Adjusting the parallelism of these operators is crucial to handling fluctuating workloads efficiently while balancing resource usage and processing performance. However, existing methods often fail to effectively utilize execution histories or fully exploit DAG structures, limiting their ability to identify bottlenecks and determine the optimal parallelism. In this paper, we propose StreamTune, a novel approach for adaptive parallelism tuning in stream processing systems. StreamTune incorporates a pre-training and fine-tuning framework that leverages global knowledge from historical execution data for job-specific parallelism tuning. In the pre-training phase, StreamTune clusters the historical data with Graph Edit Distance and pre-trains a Graph Neural Network-based encoder per cluster to capture the correlation between the operator parallelism, DAG structures, and the identified operator-level bottlenecks. In the online tuning phase, StreamTune iteratively refines operator parallelism recommendations using an operator-level bottleneck prediction model enforced with a monotonic constraint, which aligns with the observed system performance behavior. Evaluation results demonstrate that StreamTune reduces reconfigurations by up to 29.6% and parallelism degrees by up to 30.8% in Apache Flink under a synthetic workload. In Timely Dataflow, StreamTune achieves up to an 83.3% reduction in parallelism degrees while maintaining comparable processing performance under the Nexmark benchmark, when compared to state-of-the-art methods.
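The pre-training phase groups historical job DAGs by Graph Edit Distance before training a per-cluster encoder. A minimal sketch of that clustering step, using NetworkX's exact GED and hierarchical clustering (the function name and threshold are illustrative, not StreamTune's actual implementation):

```python
# Hedged sketch: grouping historical operator DAGs by pairwise graph edit
# distance, as in StreamTune's pre-training phase. Names are illustrative.
import networkx as nx
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def cluster_dags(dags, threshold=2.0):
    """Cluster a list of operator DAGs by pairwise graph edit distance."""
    n = len(dags)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            # Exact GED is exponential in graph size; acceptable for the
            # small operator DAGs typical of streaming jobs.
            d = nx.graph_edit_distance(dags[i], dags[j])
            dist[i, j] = dist[j, i] = d
    # Agglomerative clustering on the condensed distance matrix.
    labels = fcluster(linkage(squareform(dist), method="average"),
                      t=threshold, criterion="distance")
    return labels

# Toy example: two identical 3-operator chains and one 5-operator chain.
g1 = nx.path_graph(3, create_using=nx.DiGraph)
g2 = nx.path_graph(3, create_using=nx.DiGraph)
g3 = nx.path_graph(5, create_using=nx.DiGraph)
labels = cluster_dags([g1, g2, g3])  # the two 3-op chains share a cluster
```

In practice an approximate GED (or the paper's own clustering) would replace the exact computation, which does not scale to large DAGs.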
Problem

Research questions and friction points this paper is trying to address.

Adaptive parallelism tuning for stream processing systems
Utilizing historical data and DAG structures for optimization
Reducing reconfigurations and parallelism degrees efficiently
Innovation

Methods, ideas, or system contributions that make the work stand out.

Clusters historical DAGs by Graph Edit Distance and pre-trains a GNN encoder per cluster
Operator-level bottleneck prediction under a monotonic constraint
Cuts reconfigurations and parallelism degrees without degrading performance
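The monotonic constraint is what makes the online phase efficient: if raising an operator's parallelism never increases its predicted bottleneck probability, the minimal safe parallelism can be found by binary search rather than exhaustive trial. A minimal sketch of that idea (the predictor below is a stand-in, not StreamTune's learned model, and all names are illustrative):

```python
# Hedged sketch: with a predictor that is monotonically non-increasing in
# parallelism, the smallest "safe" parallelism per operator is found by
# binary search over the candidate range.
def min_safe_parallelism(predict_bottleneck_prob, lo=1, hi=128, threshold=0.5):
    """Smallest parallelism whose predicted bottleneck probability < threshold."""
    while lo < hi:
        mid = (lo + hi) // 2
        if predict_bottleneck_prob(mid) < threshold:
            hi = mid          # mid is safe; try smaller parallelism
        else:
            lo = mid + 1      # mid still bottlenecks; need more parallelism
    return lo

# Stand-in monotone predictor: bottleneck probability falls as parallelism grows.
demo = lambda p: 1.0 / p
result = min_safe_parallelism(demo)  # smallest p with 1/p < 0.5, i.e. 3
```

This is why the paper enforces monotonicity on the bottleneck model: an unconstrained predictor could oscillate across parallelism values and invalidate such search-based tuning.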
Yuxing Han
Tsinghua University
Smart Agriculture · Artificial Intelligence · Video · Communication
Lixiang Chen
ByteDance Inc, East China Normal University
Haoyu Wang
ByteDance Inc
Zhanghao Chen
ByteDance Inc
Yifan Zhang
ByteDance Inc
Chengcheng Yang
University of Science and Technology of China
Database · Storage System
Kongzhang Hao
University of New South Wales
Zhengyi Yang
University of New South Wales