Recent studies have demonstrated that machine learning models such as deep
neural networks (DNNs) are easily fooled by adversarial attacks: subtle,
imperceptible perturbations of the input can change a network's prediction.
Deploying such vulnerable models raises serious concerns, especially in
security-critical domains.
Therefore, it is crucial to design defense mechanisms against adversarial
attacks. For image classification, imperceptible perturbations mostly reside
in the high-frequency spectrum of the image. In this paper, we use tensor
decomposition as a preprocessing step to compute a low-rank approximation of
each image, which discards much of the high-frequency perturbation. Recently,
the defense framework Shield was shown to "vaccinate" convolutional neural
networks (CNNs) against adversarial examples by applying random-quality JPEG
compression to local patches of ImageNet images. Our tensor-based defense
outperforms Shield's SLQ method by 14% against Fast Gradient Sign Method
(FGSM) adversarial attacks, while maintaining comparable speed.
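The low-rank preprocessing idea can be sketched with a simplified per-channel
truncated SVD; this is only a stand-in for the tensor decompositions used in
the paper, and the function name and rank value are illustrative:

```python
import numpy as np

def low_rank_denoise(image, rank=20):
    """Replace each color channel of an H x W x C image with its
    rank-`rank` truncated-SVD approximation, suppressing the
    high-frequency components where adversarial noise concentrates."""
    out = np.empty(image.shape, dtype=float)
    for c in range(image.shape[2]):
        U, s, Vt = np.linalg.svd(image[:, :, c].astype(float),
                                 full_matrices=False)
        # Keep only the top-`rank` singular components.
        out[:, :, c] = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    return out
```

Applied before the classifier's forward pass, this projection plays the same
role as Shield's JPEG compression: a lossy transform that preserves the
low-frequency content the network relies on.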