Many of today's machine learning (ML) systems are built by reusing an array
of, often pre-trained, primitive models, each fulfilling a distinct function
(e.g., feature extraction). The increasing use of primitive models
significantly simplifies and expedites the development cycles of ML systems.
Yet, because most such models are contributed and maintained by untrusted
sources, their lack of standardization and regulation entails profound security
implications, about which little is known thus far.
In this paper, we demonstrate that malicious primitive models pose immense
threats to the security of ML systems. We present a broad class of {\em
model-reuse} attacks wherein maliciously crafted models trigger host ML systems
to misbehave on targeted inputs in a highly predictable manner. By empirically
studying four deep learning systems (including both individual and ensemble
systems) used in skin cancer screening, speech recognition, face verification,
and autonomous steering, we show that such attacks are (i) effective -- with
high probability, the host systems misbehave on the targeted inputs as desired
by the adversary; (ii) evasive -- the malicious models function
indistinguishably from their benign counterparts on non-targeted inputs;
(iii) elastic -- the malicious models remain effective regardless of various
system design choices and tuning strategies; and (iv) easy -- the adversary
needs little prior knowledge about the data used for system tuning or
inference. We provide analytical justification for the effectiveness of
model-reuse attacks, attributing it to the unprecedented complexity of today's
primitive models; this suggests that the issue is fundamental to many ML
systems. We further discuss potential countermeasures and the challenges they
face, pointing to several promising research directions.