Although deep neural networks (DNNs) have made rapid progress in recent
years, they remain vulnerable in adversarial environments. A malicious backdoor
can be embedded in a model by poisoning the training dataset; its purpose is to
make the infected model give wrong predictions during inference whenever a
specific trigger appears. To mitigate the potential threats of backdoor
attacks, various backdoor detection and defense methods have been proposed.
However, existing techniques usually require the poisoned training data or
white-box access to the model, both of which are commonly unavailable in practice. In
this paper, we propose a black-box backdoor detection (B3D) method to identify
backdoor attacks with only query access to the model. We introduce a
gradient-free optimization algorithm to reverse-engineer the potential trigger
for each class, which helps to reveal the existence of backdoor attacks. In
addition to backdoor detection, we also propose a simple strategy for making
reliable predictions with identified backdoored models. Extensive experiments on
hundreds of DNN models trained on several datasets corroborate the
effectiveness of our method under the black-box setting against various
backdoor attacks.
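
The abstract only names a gradient-free optimizer. As an illustration of the query-only setting, the sketch below uses an NES-style (natural evolution strategies) gradient estimator, one common gradient-free choice, to search for a small trigger that flips predictions toward a candidate target class. The `query_model` interface, the mask/pattern trigger parameterization, the L1 sparsity penalty, and all hyperparameters are assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

# Hypothetical black-box interface: returns class probabilities for a
# batch of images (N, H, W, C) in [0, 1]. Only queries are allowed.
def query_model(images):
    raise NotImplementedError("wrap the deployed model's prediction API here")

def apply_trigger(images, mask, pattern):
    # Stamp the candidate trigger: per-pixel convex combination
    # controlled by the mask (mask and pattern broadcast over the batch).
    return (1.0 - mask) * images + mask * pattern

def trigger_loss(images, mask, pattern, target, lam=1e-3):
    # Cross-entropy toward the target class, plus an L1 penalty that
    # favors small (sparse) triggers.
    probs = query_model(apply_trigger(images, mask, pattern))
    ce = -np.log(probs[:, target] + 1e-12).mean()
    return ce + lam * np.abs(mask).sum()

def nes_reverse_engineer(images, target, shape, iters=500, pop=20,
                         sigma=0.1, lr=0.05, rng=None):
    """NES-style gradient-free search for a (mask, pattern) pair that
    flips clean images' predictions to `target`, using only queries."""
    rng = np.random.default_rng() if rng is None else rng
    # Unconstrained parameters; a sigmoid keeps mask/pattern in [0, 1].
    theta = rng.normal(0.0, 0.1, size=(2,) + shape)
    for _ in range(iters):
        grad = np.zeros_like(theta)
        for _ in range(pop):
            eps = rng.normal(size=theta.shape)
            for sign in (+1.0, -1.0):  # antithetic sampling
                cand = 1.0 / (1.0 + np.exp(-(theta + sign * sigma * eps)))
                loss = trigger_loss(images, cand[0], cand[1], target)
                grad += sign * loss * eps
        theta -= lr * grad / (2 * pop * sigma)  # descend the NES estimate
    squashed = 1.0 / (1.0 + np.exp(-theta))
    return squashed[0], squashed[1]  # (mask, pattern)
```

In trigger reverse-engineering methods of this kind, the search is run once per class, and a class whose recovered trigger has anomalously small L1 mass is flagged as the likely backdoor target.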
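The "simple strategy for reliable predictions" is likewise only named in the abstract. The following sketch shows one plausible occlusion-based rule (an assumption, not necessarily the paper's strategy): if masking out the suspected trigger region moves a prediction away from the identified target label, trust the occluded view. It reuses `numpy` and the hypothetical `query_model` from the sketch above.

```python
def reliable_predict(image, mask, target, fill=0.5, tau=0.5):
    # Occlude the suspected trigger region with a neutral fill value.
    occluded = np.where(mask > tau, fill, image)
    raw = query_model(image[None])[0].argmax()
    safe = query_model(occluded[None])[0].argmax()
    # If the raw prediction hits the suspected target label but changes
    # once the region is occluded, the trigger likely caused the flip.
    return safe if (raw == target and safe != target) else raw
```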