Machine learning (ML) models have proven vulnerable to a variety of
attacks that allow an adversary to learn sensitive information, cause
mispredictions, and more. While these attacks have been extensively studied,
current research predominantly focuses on analyzing each attack type
individually. In practice, however, adversaries may employ multiple attack
strategies simultaneously rather than relying on a single approach. This
prompts a crucial yet underexplored question: When an adversary has multiple
attacks at their disposal, can they use one attack to mount or amplify the
effect of another? In this paper, we take the first step in studying the
strategic interactions among different attacks, which we define as attack
compositions. Specifically, we focus on four well-studied attacks during the
model's inference phase: adversarial examples, attribute inference, membership
inference, and property inference. To facilitate the study of their
interactions, we propose a taxonomy based on three stages of the attack
pipeline: preparation, execution, and evaluation. Using this taxonomy, we
identify four effective attack compositions, such as property inference
assisting attribute inference at its preparation stage and adversarial examples
assisting property inference at its execution stage. We conduct extensive
experiments on the attack compositions using three ML model architectures and
three benchmark image datasets. Empirical results demonstrate the effectiveness
of these four attack compositions. We implement and release a modular,
reusable toolkit, COAT. We hope our work serves as a call for researchers and
practitioners to consider advanced adversarial settings involving multiple
attack strategies, with the aim of strengthening the security and robustness
of AI systems.