Abstract
Marked temporal point processes (MTPPs) have been shown to be extremely
effective in modeling continuous time event sequences (CTESs). In this work, we
present adversarial attacks designed specifically for MTPP models. A key
criterion for a good adversarial attack is its imperceptibility. For objects
such as images or text, this is often achieved by bounding the perturbation
within a fixed $L_p$ norm-ball. However, minimizing analogous distance norms
between two CTESs in the MTPP setting is challenging due to their sequential
nature and their varying time-scales and lengths. We address this challenge by
first permuting the events and then adding noise to the arrival timestamps.
However, the worst-case optimization of such adversarial attacks is a hard
combinatorial problem, requiring exploration of a permutation space that is
factorially large in the length of the input sequence. We therefore propose
PERMTPP, a novel differentiable scheme that performs adversarial attacks by
learning to minimize the likelihood while keeping the distance between the two
CTESs small. Our experiments on four real-world datasets demonstrate PERMTPP's
offensive and defensive capabilities and its lower inference times.
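To make the perturbation model concrete, the sketch below illustrates the two basic operations the abstract describes: permuting the events of a CTES and adding noise to the arrival timestamps, then re-sorting so the perturbed sequence remains a valid event sequence. The function name, the Gaussian noise model, and the re-sorting step are illustrative assumptions; PERMTPP itself learns the permutation and noise differentiably, which this minimal sketch does not attempt.

```python
import numpy as np

def perturb_ctes(times, marks, perm, noise_scale=0.1, rng=None):
    """Hypothetical sketch of a CTES perturbation: permute events, then
    add noise to arrival timestamps (assumed Gaussian here)."""
    rng = np.random.default_rng(0) if rng is None else rng
    t = np.asarray(times, dtype=float)[perm]   # apply a fixed permutation
    m = np.asarray(marks)[perm]
    t = t + rng.normal(0.0, noise_scale, size=t.shape)  # additive noise
    order = np.argsort(t)                      # restore monotone arrival times
    return t[order], m[order]
```

In the actual attack, the permutation would be chosen (learned) to minimize the victim model's likelihood subject to the perturbed sequence staying close to the original; here it is simply given as input.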