Despite their tremendous success in a range of domains, deep learning systems
are inherently susceptible to two types of manipulations: adversarial inputs --
maliciously crafted samples that deceive target deep neural network (DNN)
models, and poisoned models -- adversarially forged DNNs that misbehave on
pre-defined inputs. While prior work has intensively studied the two attack
vectors in parallel, there is still a lack of understanding about their
fundamental connections: What are the dynamic interactions between the two
attack vectors? What are the implications of such interactions for optimizing
existing attacks? What are the potential countermeasures against such enhanced
attacks? Answering these key questions is crucial for assessing and mitigating
the holistic vulnerabilities of DNNs deployed in realistic settings.
Here we take a solid step towards this goal by conducting the first
systematic study of the two attack vectors within a unified framework.
Specifically, (i) we develop a new attack model that jointly optimizes
adversarial inputs and poisoned models; (ii) with both analytical and empirical
evidence, we reveal that there exist intriguing "mutual reinforcement" effects
between the two attack vectors -- leveraging one vector significantly amplifies
the effectiveness of the other; (iii) we demonstrate that such effects enable a
large design spectrum for the adversary to enhance existing attacks that
exploit both vectors (e.g., backdoor attacks), such as maximizing the attack
evasiveness with respect to various detection methods; (iv) finally, we discuss
potential countermeasures against such optimized attacks and their technical
challenges, pointing to several promising research directions.
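As a rough illustration of the joint attack model in (i), the adversary can be
thought of as co-optimizing an adversarial input $x^*$ and a poisoned model
$\theta^*$; the sketch below uses our own illustrative notation ($f_\theta$,
$x_\circ$, $t$, $\Delta$, $\lambda_1$, $\lambda_2$, $\mathcal{D}$), not the
formulation adopted in the paper:
\[
\min_{x^*,\,\theta^*} \;
\ell\big(f_{\theta^*}(x^*),\, t\big)
\;+\; \lambda_1\, \Delta(x^*, x_\circ)
\;+\; \lambda_2\, \mathbb{E}_{(x,y)\sim\mathcal{D}}\!\left[\ell\big(f_{\theta^*}(x),\, y\big)\right],
\]
where $t$ is the adversary's target class, $\Delta$ measures the distortion of
$x^*$ relative to the benign input $x_\circ$, the last term preserves the
poisoned model's accuracy on clean data $\mathcal{D}$, and $\lambda_1,
\lambda_2$ trade attack efficacy against evasiveness.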