Machine learning models in the wild have been shown to be vulnerable to
Trojan attacks during training. Although many detection mechanisms have been
proposed, strong adaptive attackers have proven effective against them. In this
paper, we consider an intelligent and adaptive adversary and aim to answer two
questions: (i) What is the minimal number of instances a strong attacker needs
to Trojan? and (ii) Can such an attacker bypass strong detection mechanisms?
We provide an analytical characterization of adversarial capability and of the
strategic interactions that take place between the adversary and the detection
mechanism. We characterize adversarial capability in terms of the
fraction of the input dataset that can be embedded with a Trojan trigger. We
show that the loss function has a submodular structure, which leads to the
design of computationally efficient algorithms to determine this fraction with
provable bounds on optimality. We propose the Submodular Trojan algorithm to
determine the minimal fraction of samples into which a Trojan trigger must be
injected. To evade
detection of the Trojaned model, we model the strategic interaction between the
adversary and the Trojan detection mechanism as a two-player game. We show that
the adversary wins the game with probability one, thus bypassing detection. We
establish this by proving that the output probability distributions of a Trojan
model and a clean model are identical when the adversary follows the Min-Max
(MM) Trojan algorithm.
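For intuition, the sketch below shows how a greedy selection rule can exploit a submodular objective to pick which training samples to poison. It is a minimal, hypothetical illustration: the `trojan_gain` objective, the `budget` constraint, and the stopping rule are assumed placeholders rather than the paper's exact formulation of the Submodular Trojan algorithm, and the min-max game underlying the MM Trojan algorithm is not shown.

```python
# Minimal, hypothetical sketch of greedy poisoned-sample selection under a
# submodular objective. `trojan_gain` and `budget` are illustrative
# placeholders, not the paper's exact formulation.

def greedy_trojan_selection(candidates, trojan_gain, budget):
    """Greedily choose which training samples to embed with the trigger.

    candidates:  iterable of sample indices eligible for poisoning
    trojan_gain: callable(selected, idx) -> marginal improvement in the
                 (assumed submodular) attack objective from poisoning idx
    budget:      maximum number of samples the adversary may poison
    """
    selected = set()
    remaining = set(candidates)
    while remaining and len(selected) < budget:
        # Pick the sample with the largest marginal gain given what is
        # already selected; for monotone submodular objectives this greedy
        # rule carries the usual constant-factor optimality guarantee.
        best = max(remaining, key=lambda idx: trojan_gain(selected, idx))
        if trojan_gain(selected, best) <= 0:
            break  # no remaining sample improves the objective
        selected.add(best)
        remaining.remove(best)
    return selected
```

Under submodularity, marginal gains diminish as more samples are selected, which is the intuition behind provable bounds of this kind and behind a small poisoned fraction sufficing.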
We perform extensive evaluations of our algorithms on the MNIST, CIFAR-10, and
EuroSAT datasets. The results show that (i) with the Submodular Trojan
algorithm, the adversary needs to embed a Trojan trigger into only a very small
fraction of samples to achieve high accuracy on both Trojan and clean samples,
and (ii) the MM Trojan algorithm yields a trained Trojan model that evades
detection with probability one.