Image classifiers often suffer from adversarial examples: inputs crafted by adding small, strategically chosen perturbations to images so that classifiers misclassify them. Over the years, many defense mechanisms
have been proposed, and different researchers have made seemingly contradictory claims about their effectiveness. We present an analysis of possible adversarial
models, and propose an evaluation framework for comparing different defense
mechanisms. As part of the framework, we introduce a more powerful and
realistic adversary strategy. Furthermore, we propose a new defense mechanism
called Random Spiking (RS), which generalizes dropout and injects random noise into the training process in a controlled manner. Evaluations under our proposed framework suggest that RS delivers better protection against adversarial examples than many existing schemes.
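
To make the relationship to dropout concrete, below is a minimal, hypothetical sketch of an RS-style layer in PyTorch. It assumes only what the abstract states: like dropout, the layer selects a random subset of unit activations during training, but it replaces them with random noise rather than zeroing them. The class name RandomSpiking, the spiking rate p, and the uniform noise range are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class RandomSpiking(nn.Module):
    """Illustrative sketch: dropout-like layer that replaces randomly
    selected activations with random noise instead of zeroing them."""

    def __init__(self, p=0.1, low=0.0, high=1.0):
        super().__init__()
        self.p = p                        # probability a unit is "spiked" (assumed)
        self.low, self.high = low, high   # range of injected noise (assumed)

    def forward(self, x):
        if not self.training or self.p == 0.0:
            return x  # identity at inference time, as with dropout
        # Units where the mask is True have their activations replaced.
        mask = torch.rand_like(x) < self.p
        noise = torch.empty_like(x).uniform_(self.low, self.high)
        return torch.where(mask, noise, x)
```

Under these assumptions, the layer would slot in after an activation, e.g. `nn.Sequential(nn.Conv2d(3, 32, 3), nn.ReLU(), RandomSpiking(p=0.1))`; with noise fixed at zero it reduces to (unscaled) dropout, which is the sense in which RS generalizes it.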