🤖 AI Summary
To address the low sampling efficiency of Transformer-based temporal point processes (TPPs), which hinders real-time sequential event generation, this paper introduces speculative decoding (originally developed for autoregressive language models) into the TPP sampling framework for the first time. Inspired by the structural alignment between thinning algorithms and speculative decoding in their "propose-verify" paradigm, the authors propose a draft-verify parallel sampling mechanism: a lightweight draft model generates candidate event sequences, while a larger target model validates and refines them in batches. Crucially, the method preserves the exact event-time distribution of the original TPP without approximation or reparameterization. Evaluations across multiple synthetic and real-world datasets demonstrate a 2-6x end-to-end speedup, significantly enhancing the practical feasibility of efficient long-sequence generation in deployment scenarios.
📝 Abstract
We propose TPP-SD, a novel approach that accelerates Transformer temporal point process (TPP) sampling by adapting speculative decoding (SD) techniques from language models. By identifying the structural similarities between thinning algorithms for TPPs and speculative decoding for language models, we develop an efficient sampling framework that leverages a smaller draft model to generate multiple candidate events, which are then verified by the larger target model in parallel. TPP-SD maintains the same output distribution as autoregressive sampling while achieving significant acceleration. Experiments on both synthetic and real datasets demonstrate that our approach produces samples from identical distributions as standard methods, but with 2-6x speedup. Our ablation studies analyze the impact of hyperparameters such as draft length and draft model size on sampling efficiency. TPP-SD bridges the gap between powerful Transformer TPP models and the practical need for rapid sequence sampling.
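The draft-verify idea described above can be sketched in miniature. The toy below is an assumption-laden illustration, not the paper's exact algorithm: a cheap "draft" model proposes a batch of candidate inter-event times, and a "target" model accepts each with probability min(1, p_target/p_draft), falling back to a fresh target-model sample on the first rejection. Both models are hypothetical constant-intensity (exponential) processes, and the rejection fallback stands in for the exact residual-distribution correction used in true speculative decoding.

```python
import math
import random

# Hypothetical constant intensities for the draft and target models.
DRAFT_RATE, TARGET_RATE = 1.5, 1.0

def pdf(rate, x):
    """Exponential density for an inter-event gap x under a constant intensity."""
    return rate * math.exp(-rate * x)

def speculative_step(k=4):
    """One draft-verify round: propose k gaps, verify each sequentially.

    Returns the list of accepted inter-event times from this batch.
    """
    accepted = []
    for _ in range(k):
        x = random.expovariate(DRAFT_RATE)  # draft model proposes a gap
        ratio = pdf(TARGET_RATE, x) / pdf(DRAFT_RATE, x)
        if random.random() < min(1.0, ratio):
            accepted.append(x)  # target model agrees; keep the draft
        else:
            # Rejected: fall back to a fresh target-model sample and stop
            # (a simplified stand-in for the exact residual correction).
            accepted.append(random.expovariate(TARGET_RATE))
            break
    return accepted

random.seed(0)
gaps = speculative_step()
print(len(gaps), all(g > 0 for g in gaps))
```

The speedup comes from the fact that, in the real method, the target Transformer scores all k drafted events in one parallel forward pass instead of k sequential ones, while the accept/reject rule keeps the sampled sequence distributed exactly as the target model prescribes.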