The recently demonstrated trojan attack on deep neural network (DNN) models is an insidious variant of data poisoning attacks. A trojan attack exploits a backdoor embedded in a DNN model, leveraging the difficulty of interpreting the learned model, to misclassify any input stamped with the attacker's chosen
trojan trigger. Since the trojan trigger is a secret guarded and exploited by
the attacker, detecting such trojan inputs is a challenge, especially at
run-time when models are in active operation. This work builds a STRong
Intentional Perturbation (STRIP)-based run-time trojan attack detection system,
focusing on vision systems. We intentionally perturb the incoming input, for
instance by superimposing various image patterns, and observe the randomness of
predicted classes for perturbed inputs from a given deployed model---malicious
or benign. A low entropy in predicted classes violates the input-dependence
property of a benign model and implies the presence of a malicious input---a
characteristic of a trojaned input. The high efficacy of our method is
validated through case studies on three popular and contrasting datasets:
MNIST, CIFAR10 and GTSRB. We achieve an overall false acceptance rate (FAR) of
less than 1%, given a preset false rejection rate (FRR) of 1%, for different
types of triggers. Using CIFAR10 and GTSRB, we empirically achieve 0% for both FRR and FAR. We also evaluate STRIP's robustness against a
number of trojan attack variants and adaptive attacks.
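The perturb-and-observe idea above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the blending weight `alpha`, the number of overlay images, and the `predict_fn` interface (a function returning a softmax probability vector) are all assumptions made for the sketch.

```python
import numpy as np

def strip_entropy(input_img, perturb_set, predict_fn, alpha=0.5):
    """Mean Shannon entropy of predictions for perturbed copies of one input.

    input_img   : H x W x C array with values in [0, 1]
    perturb_set : list of clean images to superimpose (the perturbation patterns)
    predict_fn  : assumed interface -- maps an image to a softmax probability vector
    alpha       : assumed blending weight for the superimposition
    """
    entropies = []
    for overlay in perturb_set:
        # Intentionally perturb the input by superimposing a clean image.
        blended = np.clip(alpha * input_img + (1.0 - alpha) * overlay, 0.0, 1.0)
        probs = predict_fn(blended)
        # Shannon entropy of the predicted class distribution (epsilon avoids log 0).
        entropies.append(-np.sum(probs * np.log2(probs + 1e-12)))
    # A trojaned input keeps hijacking the prediction regardless of perturbation,
    # so its mean entropy is abnormally low; a benign input yields high entropy.
    return float(np.mean(entropies))
```

At deployment, the detector would compare this score against a threshold calibrated on clean held-out inputs to meet a preset false rejection rate (e.g., 1%), flagging low-entropy inputs as trojaned.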