Differentiable Adversarial Attacks for Marked Temporal Point Processes

πŸ“… 2025-01-17
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Adversarial robustness of marked temporal point process (MTPP) models remains underexplored. Method: This paper proposes PERMTPP, the first end-to-end differentiable permutation-and-perturbation attack framework for MTPPs. It jointly optimizes event reordering and additive timestamp noise while preserving sequence-level statistics and temporal structure. By introducing a differentiable permutation operator and backpropagating gradients through the MTPP likelihood, PERMTPP turns the factorially large reordering search into a continuous optimization problem, with norm constraints keeping the perturbations imperceptible. Contributions/Results: Evaluated on four real-world datasets, PERMTPP achieves significantly higher attack success rates, transfers well against diverse defenses, and incurs lower inference times. It establishes a novel paradigm for adversarial attacks and defenses on continuous-time event sequences.
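The summary does not specify which differentiable permutation operator PERMTPP uses; a common way to relax the factorially large permutation space into a continuous one is Sinkhorn normalization, which maps a real-valued score matrix to a doubly-stochastic matrix that gradients can flow through. The sketch below is illustrative only (the function name `sinkhorn` and the use of Sinkhorn-Knopp iterations are assumptions, not the paper's stated method):

```python
import math

def sinkhorn(logits, n_iters=50):
    """Relax a real-valued score matrix into a doubly-stochastic matrix
    by alternating row and column normalization (Sinkhorn-Knopp).
    NOTE: illustrative stand-in for a differentiable permutation
    operator; not necessarily the operator used in PERMTPP."""
    m = [[math.exp(v) for v in row] for row in logits]
    n = len(m)
    for _ in range(n_iters):
        # Normalize each row to sum to 1.
        m = [[v / sum(row) for v in row] for row in m]
        # Normalize each column to sum to 1.
        col_sums = [sum(m[i][j] for i in range(n)) for j in range(n)]
        m = [[m[i][j] / col_sums[j] for j in range(n)] for i in range(n)]
    return m
```

With strongly peaked scores (e.g. one large logit per row), the output approaches a hard permutation matrix, so a downstream likelihood can be minimized by gradient descent over the logits instead of searching the discrete permutation space.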

πŸ“ Abstract
Marked temporal point processes (MTPPs) have been shown to be extremely effective in modeling continuous-time event sequences (CTESs). In this work, we present adversarial attacks designed specifically for MTPP models. A key criterion for a good adversarial attack is its imperceptibility. For objects such as images or text, this is often achieved by bounding the perturbation within a fixed $L_p$ norm-ball. However, similarly minimizing distance norms between two CTESs in the context of MTPPs is challenging due to their sequential nature and their varying time-scales and lengths. We address this challenge by first permuting the events and then adding noise to the arrival timestamps. However, the worst-case optimization of such adversarial attacks is a hard combinatorial problem, requiring exploration of a permutation space that is factorially large in the length of the input sequence. We therefore propose PERMTPP, a novel differentiable scheme that performs adversarial attacks by learning to minimize the likelihood while also minimizing the distance between the two CTESs. Our experiments on four real-world datasets demonstrate the offensive and defensive capabilities, as well as the lower inference times, of PERMTPP.
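The abstract's perturbation model (permute the events, then add noise to the arrival timestamps, with an Lβ‚‚-style distance as the imperceptibility budget) can be sketched as follows. The names `perturb_sequence` and `l2_distance` are illustrative, not the paper's API; a minimal sketch assuming the perturbed timestamps take the form t' = P t + Ξ΄:

```python
import math

def perturb_sequence(timestamps, perm, noise):
    """Apply a (soft or hard) permutation matrix `perm` and additive
    noise `delta` to a sequence of arrival timestamps: t' = P t + delta.
    Illustrative sketch of the abstract's permute-then-perturb model."""
    n = len(timestamps)
    mixed = [sum(perm[i][j] * timestamps[j] for j in range(n)) for i in range(n)]
    return [mixed[i] + noise[i] for i in range(n)]

def l2_distance(a, b):
    """L2 distance between two equal-length timestamp sequences,
    standing in for the imperceptibility constraint."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
```

For example, swapping the first two events of `[1.0, 2.0, 3.5]` and nudging the third timestamp by 0.1 yields a nearby sequence whose distance from the original can be kept within a chosen budget; the attack then searches over `perm` and `noise` to minimize the model likelihood subject to that budget.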
Problem

Research questions and friction points this paper is trying to address.

Adversarial Attacks
Marked Temporal Point Processes
Sequence Variability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adversarial Attacks
Marked Temporal Point Processes
PERMTPP Method
Pritish Chakraborty
Indian Institute of Technology, Bombay
graph machine learning, geometric deep learning, information retrieval
Vinayak Gupta
University of Washington Seattle
R Rahul
Indian Institute of Technology Bombay
Srikanta J. Bedathur
Indian Institute of Technology Delhi
Abir De
Assistant Professor, CSE, IIT Bombay
Machine Learning, Graphs, Sets, Graph Neural Networks