Deep learning models are vulnerable to adversarial examples crafted by
applying human-imperceptible perturbations to benign inputs. However, under the
black-box setting, most existing adversarial examples transfer poorly when used
to attack other defense models. In this work, by regarding adversarial example
generation as an optimization process, we propose two new methods to improve
the transferability of adversarial examples, namely the Nesterov Iterative
Fast Gradient Sign Method (NI-FGSM) and the Scale-Invariant attack Method
(SIM). NI-FGSM adapts Nesterov's accelerated gradient to iterative attacks so
as to effectively look ahead and improve the transferability of adversarial
examples.
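As a rough illustration of the look-ahead idea, here is a minimal PyTorch-style
sketch of an NI-FGSM-style update; the function name, the defaults (eps, steps,
mu), the untargeted loss objective, and the assumption of NCHW inputs in
[0, 1] are our own illustrative choices, not a definitive implementation.

import torch

def ni_fgsm(model, loss_fn, x, y, eps=16/255, steps=10, mu=1.0):
    # Sketch of a Nesterov-accelerated iterative attack (hypothetical names).
    alpha = eps / steps                      # per-step budget
    g = torch.zeros_like(x)                  # accumulated (momentum) gradient
    x_adv = x.clone().detach()
    for _ in range(steps):
        # Look ahead: take the gradient at the anticipated next point
        # x_adv + alpha * mu * g rather than at x_adv itself.
        x_nes = (x_adv + alpha * mu * g).detach().requires_grad_(True)
        loss = loss_fn(model(x_nes), y)      # e.g. cross-entropy, maximized
        grad = torch.autograd.grad(loss, x_nes)[0]
        # Accumulate the normalized gradient into the momentum term.
        g = mu * g + grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True)
        # Sign step, then project back into the eps-ball and valid pixel range.
        x_adv = x_adv + alpha * g.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1).detach()
    return x_adv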
SIM is based on our discovery of the scale-invariant property of deep learning
models, which we leverage to optimize the adversarial perturbations over scaled
copies of the input images, thereby avoiding "overfitting" to the white-box
model being attacked and generating more transferable adversarial examples.
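To make the scale-invariant idea concrete, the following sketch averages input
gradients over m scaled copies x / 2^i of the current example; the helper name
sim_gradient and the choice m = 5 are illustrative assumptions.

import torch

def sim_gradient(model, loss_fn, x_adv, y, m=5):
    # Average the loss gradient over m scaled copies of the input,
    # exploiting the model's approximate invariance to pixel scaling.
    x_adv = x_adv.detach().requires_grad_(True)
    grad = torch.zeros_like(x_adv)
    for i in range(m):
        scaled = x_adv / (2 ** i)            # scale copy S_i(x) = x / 2^i
        loss = loss_fn(model(scaled), y)
        grad = grad + torch.autograd.grad(loss, x_adv)[0]
    return grad / m                          # averaged gradient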
NI-FGSM and SIM can be naturally integrated to build a robust gradient-based
attack that generates more transferable adversarial examples against defense
models.
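Under the same assumptions, the two pieces compose naturally: the
scale-invariant gradient is evaluated at the Nesterov look-ahead point inside
the momentum loop. Again a sketch (reusing sim_gradient from above), not a
reference implementation.

import torch

def si_ni_fgsm(model, loss_fn, x, y, eps=16/255, steps=10, mu=1.0, m=5):
    # Combined sketch: Nesterov look-ahead + scale-invariant gradient.
    alpha = eps / steps
    g = torch.zeros_like(x)
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_nes = x_adv + alpha * mu * g                      # look ahead
        grad = sim_gradient(model, loss_fn, x_nes, y, m)    # scale-invariant grad
        g = mu * g + grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True)
        x_adv = x_adv + alpha * g.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1).detach()
    return x_adv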
Empirical results on the ImageNet dataset demonstrate that our attack methods
exhibit higher transferability and achieve higher attack success rates than
state-of-the-art gradient-based attacks.