Abstract
The rapid advancement of machine learning technologies raises questions about
the security of machine learning models, with respect to both training-time
(poisoning) and test-time (evasion, impersonation, and inversion) attacks.
Models performing image-related tasks, e.g., detection and classification, are
vulnerable to adversarial attacks that can degrade their performance and
produce undesirable outcomes. This paper introduces a novel technique for
anomaly detection in images called 2DSig-Detect, which uses a
2D-signature-embedded semi-supervised framework rooted in rough path theory. We
demonstrate our method in adversarial settings for training-time and test-time
attacks, and benchmark our framework against other state-of-the-art methods.
Using 2DSig-Detect for anomaly detection, we obtain both superior detection
performance and reduced computation time when identifying adversarial
perturbations in images.
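
To make the high-level description above concrete, below is a minimal, hypothetical sketch of a signature-style semi-supervised anomaly detector in Python. It is not the paper's 2DSig-Detect implementation: the feature map `toy_2dsig_features` uses mean discrete increments and their pairwise products as a crude stand-in for the true 2D signature from rough path theory, and the Mahalanobis-style score, the regularization constant, and all function names are illustrative assumptions.

```python
import numpy as np

def toy_2dsig_features(img):
    """Toy signature-style features of a 2D array.

    Illustrative proxy only: mean discrete increments along each axis
    (a level-1 analogue) plus their products (a crude level-2 analogue),
    not the iterated-integral 2D signature used in the paper.
    """
    dx = np.diff(img, axis=0).mean()  # average vertical increment
    dy = np.diff(img, axis=1).mean()  # average horizontal increment
    return np.array([dx, dy, dx * dx, dx * dy, dy * dy])

def fit_normal_model(clean_imgs):
    """Fit a Gaussian model of feature normality on clean images only."""
    feats = np.stack([toy_2dsig_features(x) for x in clean_imgs])
    mu = feats.mean(axis=0)
    # small diagonal regularizer (assumed value) keeps the covariance invertible
    cov = np.cov(feats, rowvar=False) + 1e-6 * np.eye(feats.shape[1])
    return mu, np.linalg.inv(cov)

def anomaly_score(img, mu, cov_inv):
    """Mahalanobis-style distance of an image's features to the clean model."""
    d = toy_2dsig_features(img) - mu
    return float(np.sqrt(d @ cov_inv @ d))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = [rng.normal(size=(32, 32)) for _ in range(200)]
    mu, cov_inv = fit_normal_model(clean)

    test_clean = rng.normal(size=(32, 32))
    # simulate a structured perturbation (stand-in for an adversarial attack)
    test_adv = test_clean + np.linspace(0.0, 1.0, 32)[None, :]

    print("clean score:    ", anomaly_score(test_clean, mu, cov_inv))
    print("perturbed score:", anomaly_score(test_adv, mu, cov_inv))
```

In this semi-supervised setting, the model of normality is fitted on clean images only; test images whose scores exceed a threshold calibrated on clean data would be flagged as potentially adversarial.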