Data poisoning and backdoor attacks manipulate training data in order to
cause models to fail during inference. A recent survey of industry
practitioners found that data poisoning is the number one concern among threats
ranging from model stealing to adversarial attacks. However, it remains unclear
exactly how dangerous poisoning methods are and which ones are more effective
considering that these methods, even ones with identical objectives, have not
been tested in consistent or realistic settings. We observe that data poisoning
and backdoor attacks are highly sensitive to variations in the testing setup.
Moreover, we find that existing methods may not generalize to realistic
settings. While these works serve as valuable prototypes for data poisoning,
we apply rigorous tests to determine the extent to which we should
fear them. In order to promote fair comparison in future work, we develop
standardized benchmarks for data poisoning and backdoor attacks.
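To make the setting concrete, below is a minimal sketch of the simplest kind of backdoor attack covered by such benchmarks: a fixed trigger patch is stamped onto a fraction of the training images, which are then relabeled to an attacker-chosen target class. All function and parameter names here are illustrative, not taken from any particular attack in the literature.

```python
import numpy as np

def poison_with_trigger(images, labels, target_class,
                        rate=0.1, patch_value=1.0, patch_size=3, seed=0):
    """Stamp a small trigger patch onto a random fraction of the
    training images and relabel them to the attacker's target class.
    (Hypothetical helper for illustration only.)"""
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n_poison = int(len(images) * rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Place the trigger in the bottom-right corner of each chosen image.
    images[idx, -patch_size:, -patch_size:] = patch_value
    labels[idx] = target_class
    return images, labels, idx

# Toy data: 100 grayscale 8x8 "images" with 10 classes.
X = np.zeros((100, 8, 8), dtype=np.float32)
y = np.arange(100) % 10
Xp, yp, idx = poison_with_trigger(X, y, target_class=7, rate=0.1)
```

A model trained on `(Xp, yp)` may learn to associate the trigger patch with the target class, misclassifying any triggered input at inference time while behaving normally on clean data; benchmarking an attack amounts to measuring that success rate under fixed, realistic training conditions.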