As machine learning (ML) models are increasingly trusted to make
decisions in a wide range of domains, the safety of systems relying on such
models has become a growing concern. In particular, ML models are often
trained on data from potentially untrustworthy sources, providing adversaries
with the opportunity to manipulate them by inserting carefully crafted samples
into the training set. Recent work has shown that this type of attack, called a
poisoning attack, allows adversaries to insert backdoors or trojans into the
model, enabling malicious behavior to be activated at inference time by a
simple external trigger, even when the adversary has only black-box access to
the model itself. Detecting
this type of attack is challenging because the unexpected behavior occurs only
when a backdoor trigger, which is known only to the adversary, is present.
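To make the threat model concrete, the following is a minimal sketch of this
kind of data poisoning in the style of pixel-pattern trigger stamping; the
function name, the white-square trigger, and the poisoning rate are
illustrative assumptions (it presumes images as a float NumPy array of shape
(N, H, W, C) with values in [0, 1] and an integer label vector), not the
specific attacks evaluated in this paper:

import numpy as np

def poison_dataset(images, labels, target_class, trigger_size=3,
                   rate=0.05, seed=0):
    """Stamp a small white square (the backdoor trigger) onto a random
    subset of training images and relabel them as the target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    # Trigger: a white patch in the bottom-right corner of each poisoned image.
    images[idx, -trigger_size:, -trigger_size:, :] = 1.0
    # Relabel so the model learns to map the trigger to the adversary's class.
    labels[idx] = target_class
    return images, labels

A model trained on the resulting dataset behaves normally on clean inputs but
misclassifies any input carrying the trigger as the target class.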
Model users, whether they train on the data directly or obtain pre-trained
models from a catalog, therefore cannot guarantee the safe operation of their
ML-based systems.
In this paper, we propose a novel approach to backdoor detection and removal
for neural networks. Through extensive experimental results, we demonstrate its
effectiveness for neural networks classifying text and images. To the best of
our knowledge, this is the first methodology capable of both detecting
poisonous data crafted to insert backdoors and repairing the model, without
requiring a verified and trusted dataset.