🤖 AI Summary
This work addresses the challenge of predicting process model evolution from highly sparse and heterogeneous directly-follows (DF) time series in business process control-flow structures, where conventional machine learning models achieve only limited predictive performance. We present the first systematic investigation of Time Series Foundation Models (TSFMs) for process evolution forecasting, proposing a zero-shot transfer framework that leverages pre-trained TSFMs, requiring no domain-specific fine-tuning, to model DF dynamics. Experiments demonstrate that our approach generally achieves lower MAE and RMSE than traditional models; shows strong generalization and high data efficiency even in zero-shot settings, with only marginal additional gains from fine-tuning; and is particularly advantageous on small or highly complex event logs. Our core contribution is the empirical validation that TSFMs are feasible for zero-shot process prediction, establishing a novel paradigm for process intelligence in low-resource scenarios.
📝 Abstract
Process Model Forecasting (PMF) aims to predict how the control-flow structure of a process evolves over time by modeling the temporal dynamics of directly-follows (DF) relations, complementing predictive process monitoring that focuses on single-case prefixes. Prior benchmarks show that machine learning and deep learning models provide only modest gains over statistical baselines, mainly due to the sparsity and heterogeneity of the DF time series. We investigate Time Series Foundation Models (TSFMs), large pre-trained models for generic time series, as an alternative for PMF. Using DF time series derived from real-life event logs, we compare zero-shot use of TSFMs, without additional training, with fine-tuned variants adapted on PMF-specific data. TSFMs generally achieve lower forecasting errors (MAE and RMSE) than traditional and specialized models trained from scratch on the same logs, indicating effective transfer of temporal structure from non-process domains. While fine-tuning can further improve accuracy, the gains are often small and may disappear on smaller or more complex datasets, so zero-shot use remains a strong default. Our study highlights the generalization capability and data efficiency of TSFMs for process-related time series and, to the best of our knowledge, provides the first systematic evaluation of temporal foundation models for PMF.
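The DF time series underlying PMF can be made concrete with a small self-contained sketch. The toy event log, the period binning, and the naive last-value baseline below are all illustrative assumptions, not the paper's actual pipeline or models: the sketch derives per-period counts for each directly-follows pair from an event log and scores a forecast with MAE and RMSE, the error metrics used in the study.

```python
import math
from collections import defaultdict

# Hypothetical toy event log: (case_id, activity, period) tuples,
# ordered by timestamp within each case. Real logs come from XES/CSV files.
event_log = [
    ("c1", "A", 0), ("c1", "B", 0), ("c1", "C", 1),
    ("c2", "A", 0), ("c2", "C", 1),
    ("c3", "A", 1), ("c3", "B", 1), ("c3", "C", 2),
    ("c4", "A", 2), ("c4", "B", 2), ("c4", "C", 2),
]

def df_time_series(log, num_periods):
    """Count occurrences of each directly-follows (DF) pair per time period."""
    by_case = defaultdict(list)
    for case, act, period in log:
        by_case[case].append((act, period))
    series = defaultdict(lambda: [0] * num_periods)
    for events in by_case.values():
        # Consecutive events within a case form the DF pairs.
        for (a, _), (b, p_b) in zip(events, events[1:]):
            series[(a, b)][p_b] += 1  # bucket by the successor's period
    return dict(series)

def mae(actual, forecast):
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

def rmse(actual, forecast):
    return math.sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual))

series = df_time_series(event_log, num_periods=3)
# One time series per DF pair, e.g. series[("B", "C")] == [0, 1, 2].
# Naive baseline: forecast the last period by repeating the previous value.
keys = sorted(series)
actual = [series[k][-1] for k in keys]
forecast = [series[k][-2] for k in keys]
print(mae(actual, forecast), rmse(actual, forecast))
```

A TSFM would replace the naive baseline here, producing the next-period value for each DF series from its history; the sparsity the abstract mentions is visible even in this toy example, where most series are short runs of small counts with many zeros.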