Deep learning achieves outstanding results in many machine learning tasks.
Nevertheless, it is vulnerable to backdoor attacks that modify the training set
to embed a secret functionality in the trained model. The modified training
samples contain a secret property, i.e., a trigger. At inference time, the secret
functionality is activated when the input contains the trigger, while the model
functions correctly in other cases. While there are many known backdoor attacks
(and defenses), deploying a stealthy attack is still far from trivial.
Successfully creating backdoor triggers depends on numerous parameters.
Unfortunately, research has not yet determined which parameters contribute most
to the attack performance.
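To make the setting concrete, the poisoning step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function name, parameter names, and the square-patch trigger are assumptions chosen to mirror the parameters studied (trigger size, position, color, and poisoning rate).

```python
import numpy as np

def poison(images, labels, poison_rate=0.1, trigger_size=3,
           position=(0, 0), color=1.0, target_class=0, seed=0):
    """Hypothetical sketch: stamp a square trigger patch onto a random
    fraction of the training images and relabel them to the attacker's
    target class, embedding the secret functionality."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(poison_rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    r, c = position
    s = trigger_size
    images[idx, r:r + s, c:c + s] = color  # embed the trigger patch
    labels[idx] = target_class             # secret functionality
    return images, labels, idx
```

At inference time, any input carrying the same patch activates the target-class behavior, while clean inputs are classified normally.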
This paper systematically analyzes the parameters most relevant to backdoor
attacks: trigger size, position, color, and poisoning rate.
Using transfer learning, which is very common in computer vision, we evaluate
the attack on state-of-the-art models (ResNet, VGG, AlexNet, and GoogLeNet) and
datasets (MNIST, CIFAR10, and TinyImageNet). Our attacks cover the majority of
backdoor settings considered in the literature, providing concrete directions for
future work.
Our code is publicly available to facilitate the reproducibility of our
results.