Machine learning (ML) models deployed in many safety- and business-critical
systems are vulnerable to exploitation through adversarial examples. A large
body of academic research has thoroughly explored the causes of these blind
spots, developed sophisticated algorithms for finding them, and proposed a few
promising defenses. The vast majority of these works, however, study standalone
neural network models. In this work, we build on our experience evaluating the
security of a machine learning software product deployed on a large scale to
broaden the conversation to include a systems security view of these
vulnerabilities. We describe novel challenges to implementing systems security
best practices in software with ML components. In addition, we propose
short-term mitigations that practitioners deploying machine learning modules
can use to secure their systems. Finally, we outline directions for new
research into machine learning attacks and defenses that can advance the state
of ML systems security.