Deep neural network image classifiers are reported to be susceptible to
adversarial evasion attacks, which use carefully crafted images to mislead a
classifier. Recently, various adversarial attack methods
have been proposed, most of which focus on adding small perturbations to input
images. Despite the success of existing approaches, generating realistic
adversarial images with small perturbations remains challenging. In this
paper, we address this problem by proposing a novel
adversarial method, which generates adversarial examples by imposing not only
perturbations but also spatial distortions on input images, including scaling,
rotation, shear, and translation. As humans are less sensitive to small
spatial distortions, the proposed approach can produce visually more realistic
attacks with smaller perturbations, deceiving classifiers without affecting
human predictions. We train our method with amortized techniques using neural
networks, so that adversarial examples are generated efficiently in a single
forward pass of the networks. Extensive experiments on attacking different
types of non-robustified classifiers and robust classifiers equipped with
defences show that our method achieves state-of-the-art performance compared
with advanced attack baselines.
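
As a concrete illustration of such an attack's forward pass, the sketch below
(a minimal example, not the authors' implementation) warps an image with a
small affine distortion built from scale, rotation, shear, and translation
parameters, then adds an L-infinity-bounded additive perturbation. In the
proposed method these quantities would come from the trained amortized
network; here the parameter values, the affine parameterization, and the
function names are fixed, hypothetical placeholders assumed for illustration.

    # Minimal sketch, assuming PyTorch: combine a small affine distortion
    # with an eps-bounded perturbation. Not the paper's actual code.
    import math
    import torch
    import torch.nn.functional as F

    def affine_matrix(scale, angle, shear, tx, ty):
        """Build a 2x3 affine matrix from distortion parameters
        (one assumed parameterization; the paper's may differ)."""
        cos_a, sin_a = math.cos(angle), math.sin(angle)
        # Rotation + shear + isotropic scale; translation is expressed in
        # the normalized [-1, 1] coordinates used by affine_grid.
        return torch.tensor([
            [scale * cos_a, scale * (-sin_a + shear), tx],
            [scale * sin_a, scale * cos_a,            ty],
        ]).unsqueeze(0)  # shape (1, 2, 3)

    def spatially_distorted_attack(image, delta, scale=1.02, angle=0.02,
                                   shear=0.01, tx=0.01, ty=0.01,
                                   eps=4 / 255):
        """Warp the image with a small spatial distortion, then add an
        eps-bounded perturbation (hypothetical parameter values)."""
        n = image.size(0)
        theta = affine_matrix(scale, angle, shear, tx, ty).repeat(n, 1, 1)
        grid = F.affine_grid(theta, image.shape, align_corners=False)
        warped = F.grid_sample(image, grid, align_corners=False)
        adv = warped + delta.clamp(-eps, eps)  # keep the perturbation small
        return adv.clamp(0.0, 1.0)             # stay in valid pixel range

    # Example: a single forward pass yields the adversarial example.
    x = torch.rand(1, 3, 32, 32)        # placeholder input image
    delta = 0.01 * torch.randn_like(x)  # placeholder perturbation
    x_adv = spatially_distorted_attack(x, delta)

Because both the distortion parameters and the perturbation are small, the
warped image stays visually close to the original, which is the intuition
behind combining the two kinds of modification rather than relying on
perturbations alone.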