Abstract
Advances in machine learning (ML) in recent years have enabled a dizzying
array of applications such as data analytics, autonomous systems, and security
diagnostics. ML is now pervasive: new systems and models are being deployed in
every domain imaginable, leading to the rapid and widespread deployment of
software-based inference and decision making. There is growing recognition that ML
exposes new vulnerabilities in software systems, yet the technical community's
understanding of the nature and extent of these vulnerabilities remains
limited. We systematize recent findings on ML security and privacy, focusing on
attacks identified on these systems and defenses crafted to date. We articulate
a comprehensive threat model for ML, and categorize attacks and defenses within
an adversarial framework. Key insights from work in both the ML and
security communities are identified, and the effectiveness of approaches is
related to structural elements of ML algorithms and the data used to train
them. We conclude by formally exploring the opposing relationship between model
accuracy and resilience to adversarial manipulation. Through these
explorations, we show that there are (possibly unavoidable) tensions between
model complexity, accuracy, and resilience that must be calibrated to the
environments in which these models will be used.