Abstract
Machine learning models are frequently used to solve complex security
problems, as well as to make decisions in sensitive situations like guiding
autonomous vehicles or predicting financial market behaviors. Previous efforts
have shown that numerous machine learning models are vulnerable to adversarial
manipulations of their inputs, which take the form of adversarial samples. Such
inputs are crafted by adding carefully selected perturbations to legitimate
inputs so as to force the machine learning model to misbehave, for instance by
outputting a wrong class if the machine learning task of interest is
classification. In fact, to the best of our knowledge, all previous work on
adversarial sample crafting for neural networks considered models used to solve
classification tasks, most frequently in computer vision applications. In this
paper, we contribute to the field of adversarial machine learning by
investigating adversarial input sequences for recurrent neural networks
processing sequential data. We show that the classes of algorithms introduced
previously to craft adversarial samples misclassified by feed-forward neural
networks can be adapted to recurrent neural networks. In an experiment, we show
that adversaries can craft adversarial sequences that mislead both categorical
and sequential recurrent neural networks.
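
To make the perturbation idea concrete, the following is a minimal, hypothetical sketch of a gradient-sign style perturbation applied to the input sequence of a small recurrent classifier. The toy PyTorch model, sequence shape, and epsilon budget are illustrative assumptions for exposition, not the exact construction used in the paper.

# Hypothetical sketch: gradient-sign perturbation of an RNN input sequence.
# Model architecture, data, and epsilon are assumed for illustration only.
import torch
import torch.nn as nn

class ToyRNNClassifier(nn.Module):
    def __init__(self, input_dim=8, hidden_dim=16, num_classes=2):
        super().__init__()
        self.rnn = nn.LSTM(input_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, num_classes)

    def forward(self, x):
        _, (h, _) = self.rnn(x)      # final hidden state summarizes the sequence
        return self.fc(h[-1])        # class logits

model = ToyRNNClassifier()
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 20, 8, requires_grad=True)   # one legitimate sequence of 20 steps
y = torch.tensor([1])                           # its correct label

# Differentiate the loss with respect to the input sequence, then nudge
# every time step in the direction that increases the loss.
loss = loss_fn(model(x), y)
loss.backward()
epsilon = 0.1                                   # assumed perturbation budget
x_adv = x + epsilon * x.grad.sign()             # candidate adversarial sequence

The sketch relies on the same ingredient as attacks on feed-forward models: the gradient of the model's loss with respect to its input, which remains computable when the input is a sequence processed by a recurrent architecture.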