For the time being, mobile devices employ explicit authentication mechanisms, namely unlock patterns, PINs, or biometric-based systems such as fingerprint or face recognition. Because these systems are prone to well-known attacks, the introduction of an implicit and unobtrusive authentication layer can greatly enhance security. In this study, we focus on deep learning methods for implicit authentication based on motion sensor signals. In this scenario, attackers could craft adversarial examples with the aim of gaining unauthorized access, or even of preventing a legitimate user from accessing their own mobile device. To our
knowledge, this is the first study that aims at quantifying the impact of
adversarial attacks on machine learning models used for user identification
based on motion sensors. To accomplish our goal, we study multiple methods for
generating adversarial examples. We formulate three research questions regarding the impact and the universality of adversarial examples, and we conduct a series of experiments to answer them. Our empirical results
demonstrate that certain adversarial example generation methods are specific to
the attacked classification model, while others tend to be generic, transferring across models. We thus conclude that deep neural networks trained for user identification based on motion sensors are vulnerable to adversarial inputs, misclassifying a high percentage of such examples.
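
As a concrete illustration of the attack scenario, and not of the exact methods evaluated in this study, the following is a minimal sketch of the Fast Gradient Sign Method (FGSM), one standard technique for generating adversarial examples; the model interface, tensor shapes, and epsilon value are illustrative assumptions.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, signal, label, epsilon=0.05):
        # `signal` is a hypothetical batch of motion sensor recordings, e.g.
        # a tensor of shape (1, channels, timesteps) holding accelerometer
        # and gyroscope readings; `label` is the identity predicted by the
        # classifier for the legitimate user.
        signal = signal.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(signal), label)
        loss.backward()
        # Take one step of size epsilon in the direction that increases the
        # loss, i.e., an L-infinity-bounded perturbation of the raw signal.
        return (signal + epsilon * signal.grad.sign()).detach()

    # Hypothetical usage: `net` is a trained identification network, `x` a
    # recorded motion signal, and 7 the identity assigned by the system.
    # x_adv = fgsm_attack(net, x, torch.tensor([7]))

Iterative variants such as projected gradient descent follow the same pattern, taking several smaller steps and projecting the perturbed signal back into the epsilon-ball after each one.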