When its training data are maliciously tampered with, the predictions of a
trained deep neural network (DNN) can be manipulated by an adversary; this
threat is known as the Trojan attack (or poisoning backdoor attack). The lack
of robustness of DNNs against Trojan attacks could significantly harm real-life
machine learning (ML) systems in downstream applications, raising widespread
concerns about their trustworthiness. In this paper, we study the problem of Trojan
network (TrojanNet) detection in the data-scarce regime, where only the weights
of a trained DNN are accessible to the detector. We first propose a
data-limited TrojanNet detector (TND), applicable when only a few data samples
are available for detection. We show that an effective data-limited TND can be
established by exploring connections between Trojan attacks and
prediction-evasion adversarial attacks, including both the per-sample attack and the
all-sample universal attack. In addition, we propose a data-free TND, which can
detect a TrojanNet without accessing any data samples. We show that such a TND
can be built by leveraging the internal responses of hidden neurons, which
exhibit Trojan behavior even on random noise inputs. The effectiveness of
our proposals is demonstrated through extensive experiments across different
model architectures and datasets, including CIFAR-10, GTSRB, and ImageNet.
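The intuition behind the data-free detector can be illustrated with a minimal toy sketch. This is not the paper's method; it only mimics the stated observation that a backdoor-related hidden neuron tends to respond abnormally even on pure random noise. The layer sizes, the implanted "trojan" weight row, the noise probe, and the z-score outlier rule are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: one ReLU hidden layer. Neuron 7 is given an
# abnormally large weight component aligned with a fixed trigger pattern,
# mimicking a backdoor signature (an assumption for illustration only).
d, h = 64, 32
W = rng.normal(0, 0.1, size=(h, d))
trigger = np.zeros(d)
trigger[:4] = 1.0                                  # assumed trigger support
W[7] += 5.0 * trigger / np.linalg.norm(trigger)    # implant "trojan" neuron

# Data-free probe: feed random noise (no real data samples) and record each
# hidden neuron's peak ReLU response over the noise batch.
noise = rng.normal(0, 1, size=(200, d))
activations = np.maximum(noise @ W.T, 0.0)         # shape (200, h)
peak = activations.max(axis=0)                     # per-neuron peak response

# Flag neurons whose peak response is an outlier (simple z-score rule).
z = (peak - peak.mean()) / peak.std()
suspects = np.where(z > 3.0)[0]
print(suspects)   # the implanted neuron stands out even on pure noise
```

In this sketch the implanted neuron's weight vector has a much larger norm along the trigger direction, so its response distribution on Gaussian noise is far wider than its peers', which is what the outlier rule picks up without ever touching training data.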